Kinetics

Denis Music, RWTH Aachen University

Summary

This document is a compendium of topics from the Materials Chemistry II course at RWTH Aachen University, focusing on kinetics. It covers concepts and derivations, with examples, on topics such as statistical methods, thermodynamics, the ideal gas, applications of kinetic theory, and more.

KINETICS

This compendium covers a selection of topics given in the course Materials Chemistry II at RWTH Aachen University. It is organized as follows. Derivations and important concepts, given in bold, are discussed at length, followed by examples to illustrate the importance of these issues. Please note that the equations used in these examples are not numbered, which means that they are less relevant. Every chapter contains a list of key ideas to help students in the exam preparation.

One essential general comment can be made about this compendium in terms of "book-keeping" in order to minimize confusion: here we count particles, and once we deal with concepts from chemistry, moles are preferred. This doesn't really change anything in terms of functional dependences, but it is of course significant when we want to come up with numerical solutions, e.g. 1 eV/atom = 96.49 kJ/mol. Let's be careful!

Topics covered in this compendium are:
1. Statistical Methods
2. Thermodynamics
3. Ideal Gas
4. Applications of Kinetic Theory
5. The Maxwell-Boltzmann Distribution
6. Chemical Kinetics
7. Chemical Equilibrium

It is expected that the basic thermodynamic concepts, such as the laws of thermodynamics, are already known by the students. Constructive comments on this compendium and the course in general are very welcome at any time. Please don't be timid to share your thoughts in person or send an e-mail to [email protected]. Sincere gratitude to everybody who already provided constructive criticism/suggestions!

Fig. i.i Li diffusion into a spinel LiMn2O4 cathode in a lithium ion battery. Often, these batteries are nanostructured. Blue, red, and purple spheres represent Li, O, and Mn atoms, respectively. The left image shows the state before interaction (Li diffusion) and the right one after 2500 fs.

Let's start with some general motivation for kinetics. Why do I need this course or this particular subject? Let me provide a possible answer.
We have all heard of nanotechnology, right? It should fix many open issues in physics, chemistry, materials science, medicine, computer science, ecology, sports, etc. Whenever it comes down to targeting small objects (e.g. ICs, human cells, ...), nanotechnology kicks in. Many physical and chemical properties are altered by this enormous size change. Also, a few objects or even many individual particles (see Fig. i.i, where Li moves into a cathode in a battery) are important, rather than a 50 kg bulk piece. Among other things, we need statistical tools to tackle atomic processes. For instance, what would be an average speed of Li (see Ch. 1 and Ch. 5), how does the process start (see Ch. 4), what is their rate (see Ch. 6), etc.? Hey, it's not soooo bad to know something about this subject!

1 Statistical Methods

A very common issue that may puzzle many researchers or engineers is how to interpret experimental data. Most readily available experimental methods provide macroscopic averages (mean values) only. For instance, let's say you measure mechanical properties using a technique called nanoindentation. What you obtain is a set of measured values that you average, and you describe the quality of your data by providing standard deviations. However, the real world is made of atoms! How do we explain what is going on if we deal only with macroscopic descriptions? We are forced to correlate our macroscopic observables with the underlying microscopic world. Even if we don't care about understanding, which should not be the case here, we cannot effortlessly improve the performance of a certain product if we don't have any clue what's going on. We need to understand. In this particular case, statistical methods are required to link the macroscopic and microscopic worlds. Let's get started!

Fig. 1.1 An isolated system and a system with a thermal contact.

We need to develop a common language so that nobody gets scared by the mathematics.
Many of the things discussed here may sound abstract, but soon you will see that they are quite useful. Let's consider something called a system. This can in principle be anything. Your imagination is as good as mine; let's take a star, a table, a student, a stone, or a molecule. Moving on with complexity, we come to the concept of a statistical ensemble. First, a configuration of a system is a state this system can be in, a sort of arrangement of the system. An atom is a system, and this very atom can for instance be excited (still not ionized), so we would say it is just another configuration of the same system. The statistical ensemble is the collection of all configurations that a system might be in. This is quite neat, right? We are able to follow everything our system can do.

There are several kinds of ensembles, and in this course we will consider only two, namely microcanonical (sometimes called molecular) and canonical. A microcanonical ensemble can be described as an isolated system, namely the total number of particles (N), its volume (V), and total energy (E) are constant. Note that here we actually count particles rather than moles. Its characteristic state function is the entropy (S). On the other hand, a canonical ensemble can be described as a system in contact with a heat reservoir, namely N, V, and its temperature (T) are constant. Its characteristic state function is the Helmholtz free energy (A). Illustrations of these ensembles are given in Fig. 1.1. The characteristic state functions can be used to obtain all thermodynamic properties, which will be done in the next lecture. In this lecture, we will work out the statistical issues. Let's first check up some examples.

Fig. 1.2 Ru-O-Nb nanorods (left) and computer simulation (right) showing the atomistic mechanisms relevant for their formation.
RuO2 exhibits interesting transport properties and hence can be used in microelectronic as well as thermoelectric devices (direct conversion of heat into electricity). Exciting new science and applications (boosting of properties) appear upon formation of nanostructures. One such example is given in Fig. 1.2, where Nb is incorporated into RuO2, giving rise to the formation of nanorods. To understand this mechanism, computer simulations were carried out in a canonical ensemble, since the growth temperature was approximately constant. Surface coarsening on the atomic scale occurs due to O crosslinking of two neighboring NbO6 octahedra, which in turn contributes to the experimentally observed formation of nanorods. Obviously, statistical methods are useful.

Fig. 1.3 The ten configurations of a system with 3 particles and 3 equidistant energy levels. An example of two configurations with the same energy is highlighted.

Let's start with the mathematical description. We need to take care of all possible configurations. This is actually a difficult book-keeping task, but we will make our lives easier if we pull some mathematical stunts. Starting with an example is never a bad idea. Let's imagine we have a system with N = 3 particles and 3 equidistant energy levels (see Fig. 1.3). These 3 particles can be placed in any possible way without restrictions. One may ask what particles and energy levels we are talking about. This should be understood as broadly as possible. These energy levels can be atomic shells, translational degrees of freedom, or anything else that we can use to distinguish all configurations this system can be in. Also, the particles may be understood as e.g. electrons or atoms, but your imagination is welcome. Most of the time, we will really discuss atoms or molecules. Let's assume that the ground state has the energy 0 (n0 particles), the first level reaches ε > 0 (n1 particles), and the second level 2ε (n2 particles).
Figure 1.3 shows ten possible configurations. The first configuration (from left) can be achieved in only one way, since the order of particles at an energy level is irrelevant. Statistically speaking, we say that the weight of the first configuration is 1. In general, the weight (W) can be calculated as

W = N! / (n0! n1! n2! ...)   (1.1)

The weights for all configurations in Fig. 1.3 are 1, 3, 3, 3, 3, 1, 3, 3, 6, 1 (from left to right). Even in this very simple example, there are notable differences in weights (up to 6 times). This means that some configurations will be preferred over others. In a real system, as for instance air in a room, we have a lot of particles (multiples of the Avogadro number) and hence a mess we cannot take care of by hand. We need to handle these enormous numbers somehow. A common recipe is to work with logarithms instead, so that Eq. (1.1) becomes

ln W = ln N! − Σ_j ln n_j!   (1.2)

Furthermore, some approximations can be made. One such approximation, valid for large numbers, is the so-called Stirling's approximation

ln x! ≈ x ln x − x   (1.3)

Hence, assuming a constant number of particles, Eq. (1.2) can be rewritten as

ln W = N ln N − Σ_j n_j ln n_j   (1.4)

The example in Fig. 1.3 is not under the constant-energy constraint. Let's now move on with the microcanonical ensemble and be careful about the physical interpretations. We assume that the ensemble can be described by its most dominant configuration, as this exhibits the largest probability. Thus, we have to ask ourselves: what is the dominant configuration? Obviously, not all configurations are equally probable (they bear different weights). Therefore, we are looking for the maximum of W or ln W. This is in principle a standard mathematical task (we just need the first derivative), but this is not a mathematics course. The variables describing a microcanonical ensemble have a physical meaning.
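The counting behind Fig. 1.3 and the quality of Stirling's approximation can be checked in a few lines of Python (a sketch for illustration only, not part of the course material):

```python
from math import factorial, lgamma, log

N = 3  # particles on 3 energy levels (0, eps, 2*eps), as in Fig. 1.3

# all occupation-number configurations (n0, n1, n2) with n0 + n1 + n2 = N
configs = [(n0, n1, n2) for n0 in range(N + 1) for n1 in range(N + 1)
           for n2 in range(N + 1) if n0 + n1 + n2 == N]

def weight(ns):
    """Eq. (1.1): W = N! / (n0! n1! n2! ...)."""
    w = factorial(sum(ns))
    for n in ns:
        w //= factorial(n)
    return w

weights = [weight(c) for c in configs]
print(len(configs), sorted(weights))  # 10 configurations, weights between 1 and 6
print(sum(weights))                   # 3**3 = 27 microstates in total

# Stirling's approximation, Eq. (1.3), improves as x grows
for x in (10, 100, 1000):
    exact = lgamma(x + 1)        # ln(x!)
    approx = x * log(x) - x
    print(x, round(exact, 1), round(approx, 1))
```

The run reproduces the ten configurations of Fig. 1.3, with the (1,1,1) configuration carrying the largest weight of 6.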
We only have N particles, and once we decide how many occupy e.g. the ground state, the energy and the further particle distribution are already constrained, since it is an isolated system. The variables are not independent. Two constraints need to be introduced, making sure that the total number of particles as well as the total energy are constant, which can be written as

N = Σ_i n_i ,  E = Σ_i n_i ε_i   (1.5)

Luckily enough, there are methods to deal with this conundrum as well. We can use the method of undetermined multipliers (after Lagrange), which allows for the treatment of dependent variables as if they were independent. This cook-book recipe states that we should do our standard mathematics, but we need to add the constraints, i.e. Eq. (1.5), with some unknown factors α and β, which we have to identify later on. Thus,

d ln W = Σ_i (∂ln W/∂n_i) dn_i + α Σ_i dn_i − β Σ_i ε_i dn_i = 0   (1.6)

Note that we have chosen β to enter with a negative sign for convenience, which will be cleared up soon. Let's rewrite the first term in Eq. (1.6) using Eq. (1.4) as follows

∂ln W/∂n_i = ∂(N ln N)/∂n_i − Σ_j ∂(n_j ln n_j)/∂n_i   (1.7)

Let's look more closely at the first term in Eq. (1.7). Since the derivative of N with respect to any n_i is 1, we have

∂(N ln N)/∂n_i = (∂N/∂n_i) ln N + N (1/N)(∂N/∂n_i) = ln N + 1   (1.8)

The equivalent procedure can be applied to the second term in Eq. (1.7), which gives ln n_i + 1, so that ∂ln W/∂n_i = −ln(n_i/N). The only way of satisfying d ln W = 0 is to demand that it holds for every i. Thus, the overall result is

−ln(n_i/N) + α − β ε_i = 0   (1.9)

From Eq. (1.5) we can determine α. Then, Eq. (1.9) can be rearranged into

p_i = n_i/N = e^(−β ε_i) / Σ_i e^(−β ε_i)   (1.10)

which is known as the Boltzmann distribution. This is the central result of this lecture. It is clear that this can be interpreted as a probability p (population of levels). At this point it makes sense to reveal that

β = 1/(kT)   (1.11)

where k = 1.38 × 10⁻²³ J/K (the Boltzmann constant).
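Equations (1.10) and (1.11) are easy to evaluate numerically. The sketch below (with made-up level energies) shows how the level populations shift from the ground state at low temperature towards a more even spread at high temperature:

```python
import math

k = 1.380649e-23      # Boltzmann constant, J/K
eV = 1.602176634e-19  # 1 eV in J

def boltzmann_populations(energies_eV, T):
    """Eq. (1.10): p_i = exp(-beta*eps_i) / sum_i exp(-beta*eps_i)."""
    beta = 1.0 / (k * T)  # Eq. (1.11)
    weights = [math.exp(-beta * e * eV) for e in energies_eV]
    q = sum(weights)      # the denominator of Eq. (1.10)
    return [w / q for w in weights]

# three equidistant levels at 0, 0.05 and 0.10 eV (illustrative values)
levels = [0.0, 0.05, 0.10]
for T in (100, 300, 3000):
    print(T, [round(p, 3) for p in boltzmann_populations(levels, T)])
```

At 100 K essentially only the ground state is populated, while at 3000 K all three levels carry comparable populations, anticipating the partition-function limits discussed below.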
It is now also apparent why we took β with a negative sign in Eq. (1.6); otherwise everything would look clumsy, and we will show Eq. (1.11) to be valid as soon as possible. The denominator in Eq. (1.10) is another extremely relevant quantity, called the microcanonical (or molecular) partition function. The symbol q is used here (a more traditional one is Z, stemming from the German Zustandssumme, see App. IV). Hence, we have

q = Σ_i e^(−β ε_i)   (1.12)

This can also be written as

q = Σ_levels g e^(−β ε)   (1.13)

where g is the degeneracy (degenerate states exhibit the same energy). We can choose to sum over the states or over the levels, but the results are always the same. Let's spend some time on interpretation by checking the mathematical extremes of Eq. (1.12) or Eq. (1.13):

T → 0: β → ∞, e^(−βε) → 0 except for the ground state, so q → g0
T → ∞: β → 0, e^(−βε) → 1 for every state, so q → large (the total number of states)

At 0 K, only the ground state is accessible, while at very large temperatures, all states are accessible. It appears that this has something to do with S, which will be discussed in the next lecture. One can say that q indicates the average number of states that are thermally accessible at T.

At this point it is perhaps useful to say a few words about a very familiar statistical distribution. The simplest probability distribution occurs when all values of a random variable occur with equal probability. This is called the uniform distribution. Let's suppose that the random variable x can assume m different values. Then the probability is p = 1/m. Let's check an example with a die. What is the probability that the die will land on 5? When a die is tossed, there are 6 possible outcomes, i.e. 1, 2, 3, 4, 5, 6. Each outcome is equally probable, and thus we have the uniform distribution. Hence, in our case p(x) = 1/6. Right? The Boltzmann distribution keeps track of energy levels, but the major ideas are just as undemanding.
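The two temperature limits of the partition function can be verified numerically. The sketch below uses Eq. (1.13) with an assumed, purely illustrative level scheme (energies and degeneracies are made up):

```python
import math

k = 8.617333262e-5  # Boltzmann constant in eV/K

def q_levels(levels, T):
    """Eq. (1.13): q = sum over levels of g * exp(-eps/(k*T))."""
    return sum(g * math.exp(-eps / (k * T)) for eps, g in levels)

# hypothetical spectrum: ground level with g0 = 2, plus levels at 0.1 and 0.2 eV
levels = [(0.0, 2), (0.1, 3), (0.2, 1)]  # (energy in eV, degeneracy)

print(q_levels(levels, 1))    # T -> 0 limit: q -> g0 = 2
print(q_levels(levels, 1e9))  # T -> infinity limit: q -> total number of states = 6
```

At 1 K only the doubly degenerate ground level contributes (q ≈ 2), while at an extreme temperature every one of the six states contributes a factor close to 1 (q ≈ 6), matching the limits above.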
For the canonical ensemble, a similar procedure can be carried out, but it is tedious to repeat it, so we simply list the result for the canonical partition function (Q):

Q = Σ_i e^(−β E_i) = Σ_levels g e^(−β E)   (1.14)

Noticeably, this looks the same (we use another notation for the energy states to distinguish it from the microcanonical case), but there is an important difference. In a canonical ensemble, not all states are a priori equally probable. Why? We force the system to do exactly what we want, namely to stay at a certain temperature (remember, energy, i.e. heat, can go in and out of the system).

Fig. 1.4 Two-level system in contact with a heat reservoir, with two particles: distinguishable (blue and yellow) and indistinguishable (blue only). Possible configurations (microstates that this system can be in) are numbered.

Maybe we should present an example. In Fig. 1.4 a two-level system is provided. Let's consider two cases: (i) the particles are distinguishable (e.g. the nucleus of deuterium containing a proton and a neutron) and (ii) the particles are indistinguishable (e.g. two water molecules in a gas phase). Please bear in mind that the order of particles at an energy level is irrelevant. Let's use Eq. (1.14) to obtain the canonical partition function for case (i). There are four configurations, where the second and the third are degenerate. Thus, summing over states we have

Q = Σ_i e^(−β E_i) = e^(−β(0+0)) + e^(−β(ε+0)) + e^(−β(0+ε)) + e^(−β(ε+ε)) = 1 + 2e^(−βε) + e^(−2βε)

or, summing over levels,

Q = Σ_levels g e^(−β E) = e^(−β(0+0)) + 2e^(−β(ε+0)) + e^(−β(ε+ε)) = 1 + 2e^(−βε) + e^(−2βε)

Similarly, we can obtain the canonical partition function for case (ii) as follows (no degeneracy is available any longer)

Q = Σ_i e^(−β E_i) = e^(−β(0+0)) + e^(−β(ε+0)) + e^(−β(ε+ε)) = 1 + e^(−βε) + e^(−2βε)

or

Q = Σ_levels g e^(−β E) = e^(−β(0+0)) + e^(−β(ε+0)) + e^(−β(ε+ε)) = 1 + e^(−βε) + e^(−2βε)

Another illustration is perhaps necessary.
A one-dimensional harmonic oscillator containing one particle has an infinite series of equally spaced energy levels with E_i = iħω, where i is a positive integer or zero, ħ is the Planck constant divided by 2π, and ω is the classical frequency of the oscillator. This is a very common example in nature (or at least a common model system). Let's derive the expression for the partition function if this system is in contact with a heat reservoir. Using Eq. (1.14), we have

Q = Σ_i e^(−β E_i) = 1 + e^(−βħω) + e^(−2βħω) + e^(−3βħω) + ...

Let's substitute the first exponential term with x, so that

Q = 1 + x + x² + x³ + ...

This is a geometric series, for which (|x| < 1)

1 + x + x² + x³ + ... = 1/(1 − x)

Thus, we obtain

Q = 1/(1 − x) = 1/(1 − e^(−βħω))

One may wonder what the relationship between q and Q is. This is where good old Gibbs broke his teeth. What he assumed is that

Q = q^N   (1.15)

After checking some basic thermodynamics, which will be done soon, he realized that this works in some cases but fails heavily in others. Needless to say, he was really puzzled. Any good theory should be general, and there is no way it can fail within the same field. Something seemed wrong, and he did not solve the problem. This is what is known in the history of thermodynamics as the Gibbs paradox. Only later, when quantum mechanics was introduced, was this paradox resolved. Namely, Eq. (1.15) is valid for distinguishable states, as classical mechanics assumes. See our illustration in Fig. 1.4. However, quantum mechanics clearly states that two identical particles cannot be distinguished (two electrons are indistinguishable). Clearly, as seen in Fig. 1.4, systems with indistinguishable particles produce fewer configurations. Hence,

Q = q^N / N!   (1.16)

We should still be very careful.
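The geometric-series result for the oscillator can be checked numerically: a truncated level-by-level sum converges quickly to the closed form 1/(1 − e^(−βħω)). A sketch (βħω = 0.5 is an arbitrary choice):

```python
import math

def q_oscillator_closed(x):
    """Closed form Q = 1/(1 - x), with x = exp(-beta*hbar*omega)."""
    return 1.0 / (1.0 - x)

def q_oscillator_sum(x, terms):
    """Truncated sum 1 + x + x^2 + ... over the lowest `terms` levels."""
    return sum(x**i for i in range(terms))

x = math.exp(-0.5)  # i.e. beta*hbar*omega = 0.5, an arbitrary choice
closed = q_oscillator_closed(x)
partial = q_oscillator_sum(x, 50)
print(closed, partial)  # the two agree to well below 1e-9
```

Fifty levels suffice here because the neglected tail is x⁵⁰/(1 − x), which is vanishingly small for x = e^(−0.5).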
For instance, we need to describe crystals as quantum objects, but due to the different crystallographic sites we are able to distinguish two Ti atoms in the hcp lattice, whereas we would not be able to distinguish two Ti atoms in a Ti vapor. Please also bear in mind that Eq. (1.16) is valid for a large number of particles and energy states, producing a lot of configurations (it is always safer to write up Q directly instead of constructing it from q). Now we are done with the basic statistical concepts, and we will move on with thermodynamics in the next lecture.

Checklist of key ideas:
1) What is a system?
2) What is a configuration?
3) What is a statistical ensemble?
4) What is a microcanonical ensemble and what is its characteristic state function?
5) What is a canonical ensemble and what is its characteristic state function?
6) What are the major steps in the derivation of the Boltzmann distribution?
7) What is Stirling's approximation?
8) What is the method of undetermined multipliers?
9) What is a microcanonical partition function and how can it be interpreted?
10) What is a canonical partition function?
11) What is the Gibbs paradox?

Additional information: The statistical tools presented above embody the foundations of atomistic modeling called molecular dynamics (MD). This method was used in conjunction with Fig. i.i and Fig. 1.2. Students should also bear in mind that any additional information provided in this compendium is not a part of the exam. It is only meant to illustrate some points or broaden the horizons. So let's have a look into computer simulations a bit. We could also ask ourselves why we do any modeling at all. An example of when computer simulations are very useful is when experiments supply only the final results for known input parameters. To understand the process, it is often necessary to distinguish all the single steps that the system undergoes.
Since the majority of atomic processes are extremely fast (1 fs is a short time interval for many experimental methods), computer simulations can be the only option. This is the complementary role of computer simulations to experiments. On the other hand, computer simulations can verify theoretical models at hand and provide determination of physical properties when experiments are either impossible, too expensive, or even hazardous to human health and the environment.

MD provides a direct solution of Newton's equations of motion. The force (F) exerted on each particle can be written as follows:

F = ṗ = −∂H/∂r = −∂U/∂r

It is possible to map out the trajectories of every single particle (r is the position in some potential U) constituting the object under study, due to the deterministic nature of Newton's equations of motion. In the case of materials science, H, which stands for the Hamiltonian, describes every single particle in the model at hand. It is worth noting that no approximations have been made so far in conjunction with Newton's equations of motion, and even relativistic effects can be taken into account, since F is given as the time derivative of the momentum (p) and not just as a product of mass and acceleration. Moreover, H provides a good link between quantum and classical mechanics. Most importantly for computer simulations, H enables the use of statistical mechanics, so that the macroscopic averages of experimental observables can be calculated. A conversion of microscopic information to macroscopic parameters, such as pressure or energy, is hence possible. MD simulations provide the time-dependent behavior of the studied system. They generate information on the microscopic level, based on atomic positions and momenta. A state of a system is defined by a set of parameters, for example, temperature, total energy, pressure, volume, and number of particles.
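The relation F = −∂U/∂r above can be illustrated with a toy potential. The sketch below approximates the force by a central finite difference (real MD codes use analytic gradients of the interatomic potential; the harmonic potential and its spring constant are assumptions for illustration):

```python
def force(U, r, h=1e-6):
    """F = -dU/dr, approximated by a central finite difference."""
    return -(U(r + h) - U(r - h)) / (2 * h)

# toy harmonic potential U(r) = 0.5 * kappa * r^2, spring constant assumed
kappa = 2.0
U = lambda r: 0.5 * kappa * r**2

print(force(U, 1.0))  # close to the analytic force -kappa*r = -2.0
```

For a quadratic potential the central difference is exact up to floating-point rounding, so the printed value essentially equals the analytic −κr.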
Other experimental observables may be derived from equations of state, for instance the ideal gas equation of state (see Ch. 3), and other fundamental equations of statistical mechanics. A mole of any sample contains some 10²⁴ particles (the Avogadro number), so a macroscopic specimen used in an experiment has an extremely large number of atoms or molecules, sampling an enormous number of configurations in phase space. The microscopic state of a system is defined by the atomic positions and momenta, building up a multidimensional space called the phase space. For a system of N particles, the phase space spans 6N dimensions.

There are two problems arising here. In statistical mechanics, averages corresponding to experimental parameters are defined in terms of ensemble averages. The problem is that one calculates time averages in an MD simulation, while the experimental parameters are assumed to be ensemble averages. The solution can be found in one of the most fundamental axioms of statistical mechanics, the ergodic hypothesis, which states that the time average is equal to the ensemble average. The basic idea is that if one allows the system to evolve in time indefinitely, the system will eventually pass through all possible states. Therefore, a goal of an MD simulation is to generate enough representative configurations in phase space so that this equality is satisfied. If this is the case, experimentally relevant information concerning structure, dynamics, and properties may then be calculated using a feasible amount of computer resources. Because the simulations are of fixed duration, one must be certain to sample a sufficient amount of the phase space. This is normally verified by checking whether the total energy of the studied NVE system is conserved. MD simulations require some input velocities.
If these are not available, which is the case for 99.998% of the systems under study, we normally assign initial velocities through the Maxwell-Boltzmann distribution (see Ch. 5). How do we continue from here? There are many approaches, and perhaps the most common one is the so-called Verlet algorithm:

r(t + Δt) = r(t) + Δt v(t) + ½ Δt² a(t) + ...
r(t − Δt) = r(t) − Δt v(t) + ½ Δt² a(t) − ...
v(t) = [r(t + Δt) − r(t − Δt)] / (2Δt)

Another open issue at this point is the scaling of e.g. temperature. This can, for instance, be done via the so-called Nosé-Hoover thermostat

ṗ_i = −∂H/∂r_i = −∂U/∂r_i − ξ p_i

where ξ is a parameter associated with the mass of a heat reservoir.

Suggested further reading:
P. Atkins and J. de Paula, Physical Chemistry, Oxford University Press
J. Phys.: Condens. Matter 16, S429 (2004)

2 Thermodynamics

After discussing the major statistical tools, namely partition functions and the Boltzmann distribution, we move on with thermodynamics. Now you will appreciate that these new concepts are useful, since we can determine macroscopic behavior starting from a description of the energy levels of the smallest constituents. As in the previous lecture, we will derive all equations for the microcanonical ensemble and hand-wave through the canonical ensemble. For the microcanonical ensemble we already know that the characteristic state function is S. Let's see how we can obtain it. Combining the first and the second law of thermodynamics at constant V, we have

dU = TdS − pdV = TdS   (2.1)

Obviously, we need to calculate the internal energy (U). How? Let's give it a try. Inserting Eq. (1.10) into Eq. (1.5), we obtain

E = Σ_i n_i ε_i = (N/q) Σ_i ε_i e^(−β ε_i)   (2.2)

With a very common trick used in thermodynamics,

ε_i e^(−β ε_i) = −(d/dβ) e^(−β ε_i)   (2.3)

we obtain

E = −(N/q) (d/dβ) Σ_i e^(−β ε_i) = −(N/q) (dq/dβ) = −N (d ln q/dβ)   (2.4)

What could we do with this expression?
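Returning briefly to the MD digression above: adding the two Taylor expansions of the Verlet scheme eliminates the velocity and gives the position update r(t + Δt) = 2r(t) − r(t − Δt) + Δt² a(t). A minimal sketch for a 1D harmonic oscillator (units chosen so that ω = 1; a toy example, not a production MD code):

```python
import math

# Position (Stoermer) Verlet for a 1D harmonic oscillator, a(t) = -omega^2 * r(t)
omega = 1.0
dt = 0.01
steps = 628  # roughly one period, since 2*pi/omega ~ 6.28

r_prev = math.cos(-dt)  # r(-dt) of the exact solution r(t) = cos(omega*t)
r = 1.0                 # r(0)
for _ in range(steps):
    a = -omega**2 * r
    r_prev, r = r, 2 * r - r_prev + dt**2 * a  # sum of the two Taylor expansions

error = abs(r - math.cos(omega * steps * dt))
print(error < 1e-3)  # the trajectory tracks the exact solution over one period
```

The scheme is time-reversible and conserves energy well over long runs, which is exactly the NVE conservation check mentioned above.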
Well, we could obtain U in our quest to build up the whole of thermodynamics from basic statistical concepts. What is the difference between E and U? It is essentially a constant. So far we assumed that the ground state energy is zero, but in general it does not have to be. Hence,

U = U0 + E = U0 − N (∂ln q/∂β)_NV   (2.5)

This is useful for practical purposes (see the problem solving sessions), but it is not easy to obtain the characteristic state function this way. Let's start again from scratch. Using Eq. (1.5), the internal energy can be written as

U = U0 + Σ_i n_i ε_i   (2.6)

Differentiating Eq. (2.6), we have

dU = dU0 + Σ_i n_i dε_i + Σ_i ε_i dn_i = Σ_i ε_i dn_i   (2.7)

This perhaps needs to be further discussed. We are not just mathematicians, right?!? These terms have a physical meaning. The first term in Eq. (2.7) is the derivative of a constant and hence zero. The second term can be interpreted as a change in the energy levels themselves (we move them about). This can only be done if work is done on the system. However, a microcanonical ensemble is an isolated system, and work cannot be done on it (from outside). Therefore, this term must be zero. Note that we took this for granted in the previous lecture. The third term means that we move particles around, still keeping N constant, and hence this is allowed. Let's insert Eq. (2.7) into Eq. (2.1), so that we obtain

dS = dU/T = kβ Σ_i ε_i dn_i   (2.8)

From Eq. (1.6), it can be extracted that

β ε_i = ∂ln W/∂n_i + α   (2.9)

Inserting Eq. (2.9) into Eq. (2.8), where the α term vanishes because Σ_i dn_i = 0, we derive the so-called Boltzmann equation

S = k ln W   (2.10)

This equation directly correlates thermodynamics with statistics. Let's see how we could use it. We know that at 0 K only the ground state is accessible, and hence the weight is 1. This means that S becomes zero, which is nothing but the third law of thermodynamics. Many people use this expression to define entropy in the first place. It is commonly stated that entropy is a measure of the disorder of a system.
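Equation (2.10) can be made concrete with the weights from Ch. 1. The sketch below (illustrative occupation numbers, two levels) evaluates S = k ln W via log-gamma, which stays numerically safe for large occupation numbers:

```python
from math import lgamma

kB = 1.380649e-23  # Boltzmann constant, J/K

def ln_W(ns):
    """ln of Eq. (1.1), W = N!/(n0! n1! ...), via lgamma(x+1) = ln(x!)."""
    N = sum(ns)
    return lgamma(N + 1) - sum(lgamma(n + 1) for n in ns)

def S(ns):
    """Boltzmann equation, Eq. (2.10): S = k ln W."""
    return kB * ln_W(ns)

# two levels, N = 1000 particles: all particles in one level gives W = 1,
# i.e. S = 0 (the third-law-like limit); the even split has the largest weight
print(S([1000, 0]))                                   # 0.0
print(S([500, 500]) > S([600, 400]) > S([900, 100]))  # True
```

The dominant (even-split) configuration thus carries the largest entropy, which is the statistical content of "looking for the maximum of W" in Ch. 1.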
Needless to say, Eq. (2.10) is useful, but we should see whether we can express S as a function of q as well. Using Eq. (1.4) and Eq. (1.10), we have

S = −k Σ_i n_i ln(n_i/N) = −Nk Σ_i p_i ln p_i = Nk (β Σ_i p_i ε_i + ln q Σ_i p_i)   (2.11)

Using Eq. (2.6) and the fact that the sum of all probabilities is always 1, Eq. (2.11) can be rewritten as

S = kβ(U − U0) + Nk ln q   (2.12)

Our job is completed, since we now have the characteristic state function. We will not spend any time deriving the equivalent equations for the canonical ensemble. Let's hand-wave and write them up. The internal energy is

U = U0 + E = U0 − (∂ln Q/∂β)_NV   (2.13)

The entropy for the canonical ensemble is

S = kβ(U − U0) + k ln Q   (2.14)

The characteristic state function for the canonical ensemble, as already discussed, is A. Let's derive it from its definition

A = U − TS   (2.15)

Obviously, at 0 K it holds that A0 = U0, so that

A = U − T[(U − U0)/T + k ln Q] = A0 − kT ln Q   (2.16)

Let's have a look at an example. In the first lecture, we dealt with the concept of a one-dimensional harmonic oscillator and derived Q. Now we can calculate A and S.

A = A0 − kT ln Q = A0 + kT ln(1 − e^(−βħω))

S = kβ(U − U0) + k ln Q

U − U0 = −(∂ln Q/∂β)_NV = ħω e^(−βħω)/(1 − e^(−βħω)) = ħω/(e^(βħω) − 1)

S = (ħω/T)/(e^(βħω) − 1) − k ln(1 − e^(−βħω))

The latter equation can also be obtained directly from A as follows

S = −(∂A/∂T)_NV = −k ln(1 − e^(−ħω/kT)) + (ħω/T) e^(−ħω/kT)/(1 − e^(−ħω/kT)) = (ħω/T)/(e^(βħω) − 1) − k ln(1 − e^(−βħω))

Everything seems to be consistent. You must be convinced that statistics is cool! Let's see whether we could further use these new thermodynamic concepts. This will be our first trip to chemistry. Could we determine the outcome of chemical reactions? To do so, we need the equilibrium constant (K). The starting point is the Gibbs free energy (G), which we can rewrite as

G = U + pV − TS = A + pV   (2.17)

Using Eq.
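The consistency claimed above can be spot-checked numerically: the closed-form oscillator entropy should equal −(∂A/∂T) evaluated by finite differences. A sketch (the oscillator quantum ħω = 0.05 eV is an assumed, illustrative value, and A0 = 0 is taken for convenience):

```python
import math

hbar_omega = 0.05   # eV, assumed oscillator quantum
k = 8.617333262e-5  # Boltzmann constant, eV/K

def A(T):
    """Eq. (2.16) with A0 = 0: A = kT ln(1 - exp(-hbar*omega/(k*T)))."""
    return k * T * math.log(1.0 - math.exp(-hbar_omega / (k * T)))

def S_closed(T):
    """Closed-form oscillator entropy derived above."""
    x = hbar_omega / (k * T)
    return (hbar_omega / T) / (math.exp(x) - 1.0) - k * math.log(1.0 - math.exp(-x))

T = 300.0
S_numeric = -(A(T + 0.01) - A(T - 0.01)) / 0.02  # S = -(dA/dT)_NV
print(abs(S_numeric - S_closed(T)) < 1e-9)       # the two routes agree
```

The agreement of the two routes is exactly the "everything seems to be consistent" statement above, now checked to numerical precision.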
(2.16) and assuming the ideal gas equation of state (pV = NkT), this becomes

G = G0 − kT ln Q + NkT   (2.18)

With the fact that we do not distinguish particles, i.e. Eq. (1.16), we have

G = G0 − NkT ln q + kT ln N! + NkT   (2.19)

Obviously, we can apply Eq. (1.3), which yields G = G0 − NkT ln(q/N). Since chemists like to count moles rather than particles, let's introduce the molar Gibbs free energy (Gm) and the molar microcanonical partition function (qm), so that

G_m = G_0,m − RT ln(q_m/N_A)   (2.20)

where N_A is the Avogadro number (6.022 × 10²³ particles/mol) and R = k N_A = 8.314 J/(mol K). At standard conditions

G_m⁰ = G⁰_0,m − RT ln(q_m⁰/N_A)   (2.21)

Let's apply it to a chemical reaction

aA + bB → cC + dD

Δ_rG⁰ = Δ_rG_0⁰ − RT [c ln(q⁰_C,m/N_A) + d ln(q⁰_D,m/N_A) − a ln(q⁰_A,m/N_A) − b ln(q⁰_B,m/N_A)]   (2.22)

The G⁰_0,m quantities are summed up in the Δ_rG_0⁰ term. Rewriting, we have

Δ_rG⁰ = Δ_rG_0⁰ − RT ln { [(q⁰_C,m/N_A)^c (q⁰_D,m/N_A)^d] / [(q⁰_A,m/N_A)^a (q⁰_B,m/N_A)^b] } = Δ_rG_0⁰ − RT ln Π_j (q⁰_j,m/N_A)^(ν_j)   (2.23)

where ν_j are the stoichiometric coefficients, taken positive for products and negative for reactants. The equilibrium constant is defined as follows

Δ_rG⁰ = −RT ln K   (2.24)

Comparing Eq. (2.23) and Eq. (2.24), we can express the equilibrium constant as a function of the molar microcanonical partition functions as

K = Π_j (q⁰_j,m/N_A)^(ν_j) · e^(−Δ_rG_0⁰/RT)   (2.25)

This will be discussed further in the last lecture (Ch. 7), but it is obvious that partition functions are powerful, since we can predict the outcome of chemical reactions.

Checklist of key ideas:
1) How can one derive the expression for the internal energy in a microcanonical ensemble?
2) What is the Boltzmann equation and how can it be interpreted?
3) How can one express the characteristic state function for a microcanonical ensemble?
4) How can one express the characteristic state function for a canonical ensemble?
5) Discuss the strategy to derive the equilibrium constant as a function of the molar microcanonical partition function.

Suggested further reading:
P. Atkins and J.
de Paula, Physical Chemistry, Oxford University Press

3 Ideal Gas

The ideal gas is a common demonstrator in thermodynamics and, more importantly, a widespread model system for quite some applications. In order to do any thermodynamics with the ideal gas approximation, we first need to derive the microcanonical partition function. Let's start with basic quantum mechanics, i.e. the Schrödinger equation

HΨ = EΨ   (3.1)

where H and Ψ are the Hamiltonian operator and the wavefunction, respectively. More details on quantum mechanics can be picked up in my course called Quantum Mechanics for Engineers, given in the summer term. Who wants to join? ☺

In this chapter, we will use the approach known as the particle in a box approximation. In one dimension this is the potential

V(x) = ∞ for x < 0 and x > X;  V(x) = 0 for 0 ≤ x ≤ X   (3.2)

Now, Eq. (3.1) becomes

−(ħ²/2m) d²Ψ/dx² = EΨ   (3.3)

To solve the Schrödinger equation, the following ansatz is taken

Ψ(x) = A sin kx + B cos kx   (3.4)

This gives the following eigenvalue

E = ħ²k²/(2m)   (3.5)

With the boundary conditions implied by Eq. (3.2), we have

Ψ(0) = Ψ(X) = 0  →  k = nπ/X  →  E = n² h²/(8mX²)   (3.6)

It is worth noting that n = 1, 2, 3, ..., where 0 is not allowed, since this would in turn mean that there is no particle, as Ψ = 0 explicitly. Now that we know the energy levels, we can obtain the partition function from Eq. (1.12) and some standard approximations (summation replaced by integration, and starting anyway from n = 0 instead of n = 1, since this makes the math a lot easier). Writing ε = h²/(8mX²), we have

q_x = Σ_{n=1}^∞ e^(−n²βε) ≈ ∫₀^∞ e^(−n²βε) dn   (3.7)

Let's introduce a new variable x² = n²βε:

q_x = [1/(βε)^(1/2)] ∫₀^∞ e^(−x²) dx = (2πm/(βh²))^(1/2) X   (3.8)

Note that solving these kinds of integrals will be discussed soon, in Ch. 5. In three dimensions (V = XYZ), Eq.
(3.8) becomes

q = q_x q_y q_z = \frac{(2\pi m)^{3/2}}{\beta^{3/2} h^3} V \quad (3.9)

With this microcanonical partition function, we can finally show that Eq. (1.11) holds. The equipartition theorem states that the mean energy associated with each degree of freedom of a monatomic ideal gas is the same, i.e. (1/2)kT (more argumentation will be given later on in Ch. 5). Hence, the internal energy is

U = U_0 + \frac{3}{2} NkT \quad (3.10)

We can also obtain the internal energy from Eq. (2.13) and Eq. (1.16)

U = U_0 - \left( \frac{\partial \ln Q}{\partial \beta} \right)_{N,V} = U_0 + \frac{3N}{2\beta} \quad (3.11)

Moreover, the equipartition theorem provides a convenient way to derive the corresponding laws for extreme relativistic ideal gases, such as neutron stars or white dwarfs (our Sun will end up in this state in many billion years). Comparing Eq. (3.10) and Eq. (3.11), Eq. (1.11) immediately follows. Finally, the homework assignment from the first lecture is done now. Note that the result is the same for both the canonical and the microcanonical ensemble.

Let's see what else we can cook up in the canonical ensemble. In any case, we need the Helmholtz free energy since it is a characteristic state function. From Eq. (2.16) and Eq. (3.9), we have

A = A_0 - kT \ln Q = A_0 + kT \ln N! - NkT \ln \frac{(2\pi mkT)^{3/2} V}{h^3} \quad (3.12)

Using the first and the second law of thermodynamics together with Eq. (2.15), we obtain the following

dA = dU - TdS - SdT = TdS - pdV - TdS - SdT = -pdV - SdT \quad (3.13)

Now we can obtain the pressure (p) and the entropy. Let's start with p.

p = -\left( \frac{\partial A}{\partial V} \right)_{N,T} = NkT \, \frac{(2\pi mkT)^{3/2}}{h^3} \, \frac{h^3}{(2\pi mkT)^{3/2} V} = \frac{NkT}{V} \quad (3.14)

This is of course the ideal gas law, our equation of state. Surely, you can say that you knew it all along, but it is really cool to see this great consistency, and we have actually derived it. It is not just an experimental observation any longer. One more comment we should make here. Remember the Gibbs paradox! If we had used Eq. (1.15) instead of Eq. (1.16), nothing would have changed in Eq.
(3.14) since the volume dependence has nothing to do with N! at any time. So far so good, Gibbs would say. Let's see what happens with the entropy of this system. It can be written as

S = -\left( \frac{\partial A}{\partial T} \right)_{N,V} = -kN \ln N + kN + Nk \ln \frac{(2\pi mkT)^{3/2} V}{h^3} + NkT \, \frac{3}{2} \frac{1}{T} \quad (3.15)

where the last term comes from differentiating the T^{3/2} inside the logarithm. Rearranging the terms in Eq. (3.15), we have

S = -\left( \frac{\partial A}{\partial T} \right)_{N,V} = Nk \ln \left[ \frac{(2\pi mkT)^{3/2} V}{N h^3} e^{5/2} \right] \quad (3.16)

This is the so-called Sackur-Tetrode equation. Obviously, it is very important to use Eq. (1.16) and not Eq. (1.15). Gibbs did not know this, hence the paradox, which we have just lifted. The Sackur-Tetrode equation was known from experimental observations (note that it contains the Planck constant) before the theory of quantum mechanics was discovered. Now, we have explicitly derived it. It can also directly be shown that the Sackur-Tetrode equation is consistent with isothermal expansion.

Checklist of key ideas:
1) What model was used to derive the microcanonical partition function for a monatomic ideal gas from the Schrödinger equation?
2) State the equipartition theorem.
3) How did we obtain β?
4) Discuss the Gibbs paradox in terms of the ideal gas concept.
5) Derive the ideal gas law from the corresponding partition function.
6) Derive the Sackur-Tetrode equation from the corresponding partition function.

Additional information: The solution of Eq. (3.3) for the potential given in Eq. (3.2) was obtained for the energy levels only, i.e. Eq. (3.5). For completeness' sake, one also needs to obtain the wavefunction. This is not relevant for this course, but for those students with more interest, the math is given as follows. Essentially, we only need to normalize the wavefunction by finding A.
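Before the analytic derivation, the normalization constant can also be anticipated numerically; a minimal sketch (the box length, the state indices, and the step count are arbitrary illustrative choices, and a simple midpoint rule stands in for proper quadrature):

```python
import math

def normalize(n, X, steps=10_000):
    """Numerically find A from the condition  int_0^X |Psi|^2 dx = 1,
    with Psi = A sin(n*pi*x/X); midpoint rule, a toy check only."""
    dx = X / steps
    integral = sum(math.sin(n * math.pi * (i + 0.5) * dx / X) ** 2
                   for i in range(steps)) * dx
    return 1.0 / math.sqrt(integral)  # A such that A^2 * integral = 1

# The integral evaluates to X/2 for every n, so A should approach sqrt(2/X):
X = 2.0
for n in (1, 2, 3):
    assert abs(normalize(n, X) - math.sqrt(2.0 / X)) < 1e-6
```

The numerical constant agrees with the analytic A = (2/X)^{1/2} derived below for any n, since sin^2 averages to 1/2 over the box.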
nx X X   dx  A 2  sin 2 2 dx 0 0 X 1  sin axdx  ( ax  sin ax cos ax ) 2 2a 1  n nx nx  X X 2 X 2  2  dx  A 2  x  sin cos  A 1  A n  X X X 0 2 X 0 2 X 2 nx ( x)  sin X X Suggested further reading: P. Atkins and J. de Paula, Physical Chemistry, Oxford University Press 23 Denis Music Kinetics 4 Applications of Kinetic Theory The exponential dependence of energy in the Boltzmann distribution is commonly called the Boltzmann factor. This kind of functional dependence is used to describe many phenomena, ranging from pressure changes in the atmosphere, kinetics of chemical reactions, phase transitions and diffusion, to evaporation, electron emission and ionization. Energy experiences different forms (see Fig. 4.1). It can be found in the form of heat, potential energy due to gravitation field, kinetic energy, enthalpy, the Gibbs free energy and so on. In all cases considered in this lecture, many atomistic phenomena are quite complex and demand a lot of mathematical machinery, but we will be brave and use the knowledge we have gained so far to approximately describe these. Let’s not say “Luke, use the force”, let’s rather say “Luke, use the exponential term”. Fig. 4.1 Various forms of energy. We start with the so-called barometric formula. It describes how pressure changes with altitude. Climbing a mountain can really be fun, but a bit hazardous if the peak is too high. The air becomes thinner, people would say, and this is essentially what is described by the barometric formula. Let’s derive it. Our atmosphere is quite complicated so we need quite a bit of approximations. Let’s assume a canonical ensemble. Hence, from Eq. (2.16) and Eq. 
(1.16) we can obtain

A = A_0 - NkT \ln q + NkT \ln N - NkT \quad (4.1)

This can be used to estimate the chemical (total) potential (μ) of a particle with mass m in the gravitational field (g is the acceleration of gravity, 9.8 m/s^2)

\mu = \left( \frac{\partial A}{\partial N} \right)_{V,T} + mgh = kT \ln \frac{N}{q} + mgh \quad (4.2)

At equilibrium, i.e. at some height h and at the ground, the chemical potential should level out, i.e. μ(h) = μ(0), so that

kT \ln \frac{N(h)}{q} + mgh = kT \ln \frac{N(0)}{q} \quad (4.3)

After rearranging the terms in Eq. (4.3), we have

N = N_0 \, e^{-\frac{mgh}{kT}} \quad (4.4)

Assuming the ideal gas law, i.e. Eq. (3.14), we can obtain the barometric formula

p = p_0 \, e^{-\frac{mgh}{kT}} \quad (4.5)

Obviously, the pressure decreases as h increases. This expression is commonly used and was known before the era of partition functions. A more traditional derivation comes from fluid mechanics, but it won't be discussed here. It can also be noted that adding e.g. the kinetic energy in Eq. (4.2), as provided by the equipartition theorem (see Ch. 3), would not alter Eq. (4.3) due to cancellation of terms.

Fig. 4.2 Energetics of an exothermic reaction.

The next application is in the field of chemical kinetics. As discussed earlier, as soon as we borrow concepts from chemistry, let's stick to counting moles rather than particles. This means that energy is in units of kJ/mol and kT becomes RT, where R is the molar (or universal) gas constant (8.314 J/(mol K)). An example of an exothermic chemical reaction (the change in enthalpy ΔH is negative) is given in Fig. 4.2. Another term appears here for the first time; it is the so-called activation energy (E_A). It is an energy barrier that must be overcome in order to trigger a process (see the example below), in this case a chemical reaction. Even though this reaction is exothermic, a little "push" is required, and this is what determines (limits) its rate (v). Let's assume we have the following reaction

A + B → C \quad (4.6)

as illustrated in Fig. 4.2.
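As a quick aside before working through the reaction energetics, the barometric formula of Eq. (4.5) is easy to evaluate numerically; a minimal sketch (the mean molecular mass of air, the isothermal-atmosphere assumption, and the chosen altitude are all illustrative approximations):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
g = 9.8              # acceleration of gravity, m/s^2
m_air = 4.8e-26      # mean mass of an "air" molecule, kg (~29 g/mol, an approximation)

def barometric_pressure(p0, h, T):
    """Eq. (4.5): p = p0 * exp(-m*g*h / (k*T)) for an isothermal atmosphere."""
    return p0 * math.exp(-m_air * g * h / (k_B * T))

# Sea-level pressure vs. ~4800 m altitude (roughly the summit of Mont Blanc),
# assuming T = 273 K throughout the air column:
p0 = 101325.0
p = barometric_pressure(p0, 4800.0, 273.0)  # comes out a bit more than half of p0
```

The air at that altitude indeed becomes "thinner" by roughly a factor of two, which is consistent with everyday mountaineering experience.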
For the forward reaction (index f), only the activation energy needs to be overcome to start the reaction. Hence, intuitively we have

v_f = c_f [A][B] \, e^{-\frac{E_A}{RT}} \quad (4.7)

since we need to cross the barrier E_A, and the rate is larger if the concentrations (in square brackets) are larger. The symbol c designates a constant. It is common to rewrite Eq. (4.7) as

v_f = k_f [A][B] \quad (4.8)

where k_f is the rate constant for the forward reaction. Please don't mix it up with the Boltzmann constant or the equilibrium constant! Obviously, any rate constant k can generally be written as

k = A \, e^{-\frac{E_A}{RT}} \quad (4.9)

which will further be discussed and whose functional dependence will be motivated later (see Ch. 6). Note that we have rewritten c into A. Let's plot ln k versus 1/T. This kind of plot is called the Arrhenius plot (see Fig. 4.3). It can be used to model the temperature dependence of diffusion coefficients, the population of crystal vacancies, creep rates, and many other thermally induced processes/reactions. Examples will be given in the problem solving sessions.

Fig. 4.3 Arrhenius plot.

Obviously, the slope is proportional to the activation energy, a positive number, and this is the most common experimental way to obtain it. Let's spend some time interpreting this equation. The constant A designates how many times we try to overcome the barrier E_A, or simply the frequency of attempts, which is the reason to call it the frequency factor. The Boltzmann factor designates the probability of success, and its product with A is the number of successful attempts. One more comment must be made about the activation energy. It is unfortunately path dependent (it is affected by the surroundings). This was figured out by Ostwald, and he found ways to alter it. Nowadays, we call it catalysis. A catalyst is not consumed in a chemical reaction; it is rather used to speed up the process by decreasing E_A.
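In practice, E_A and A are extracted exactly as Fig. 4.3 suggests, from a straight-line fit of ln k versus 1/T; a sketch with synthetic data (the chosen values of E_A and A are made up purely for illustration):

```python
import math

R = 8.314  # molar gas constant, J/(mol K)

def arrhenius_fit(temperatures, rate_constants):
    """Least-squares line through (1/T, ln k); the slope is -EA/R (Fig. 4.3).
    Returns (EA in J/mol, frequency factor A)."""
    xs = [1.0 / T for T in temperatures]
    ys = [math.log(kk) for kk in rate_constants]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return -slope * R, math.exp(intercept)

# Synthetic "measurements" generated from Eq. (4.9) with EA = 75 kJ/mol, A = 1e13:
EA_true, A_true = 75e3, 1e13
Ts = [300.0, 350.0, 400.0, 450.0]
ks = [A_true * math.exp(-EA_true / (R * T)) for T in Ts]
EA_fit, A_fit = arrhenius_fit(Ts, ks)  # recovers EA and A from the slope/intercept
```

Note how strongly the rate responds to the barrier: at 300 K, lowering E_A by 20 kJ/mol (roughly what a catalyst might achieve) multiplies k by e^{20000/(8.314·300)}, about three orders of magnitude.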
The catalyst works by providing an alternative reaction pathway to the reaction product. For instance, the disproportionation of hydrogen peroxide to give water is a very slow reaction. Upon addition of a small amount of the catalyst MnO2, this reaction becomes very fast. Time is money; thank you, Mr. Ostwald!

After these details on the activation energy, let's move on with the further discussion of Fig. 4.2. The forward reaction is described by Eq. (4.7), while for the backward reaction (index b) two barriers need to be overcome, so that

v_b = c_b [C] \, e^{-\frac{E_A - \Delta H}{RT}} \quad (4.10)

At equilibrium (v_f = v_b), we have

\frac{[A][B]}{[C]} = c \, e^{\frac{\Delta H}{RT}} \quad (4.11)

Obviously, the activation energy does not describe the equilibrium. It is rather important for kinetics. Generally, the Boltzmann factor is the key ingredient.

Fig. 4.4 Schematics of Al migration in TiCx (left) and corresponding energetics (right).

The equations valid for chemical kinetics can be applied to other phenomena. For instance, phase transitions can also be described with Fig. 4.2 and the corresponding equations, but it is rather the Gibbs free energy that is the preferred choice of energetics. In any case, the activation energy appears again as an important concept. These kinds of energy barriers are also present in diffusion processes. Figure 4.4 shows the schematics of Al migration in TiCx (x

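The claim that the activation energy drops out at equilibrium, Eqs. (4.7) to (4.11), can also be verified numerically; a sketch in which every rate parameter is made up for illustration:

```python
import math

R = 8.314  # molar gas constant, J/(mol K)

def ratio_at_equilibrium(EA, dH, T, cf=1.0, cb=1.0):
    """[A][B]/[C] at equilibrium from v_f = v_b, Eqs. (4.7) and (4.10).
    EA and dH in J/mol; dH < 0 for an exothermic reaction."""
    kf = cf * math.exp(-EA / (R * T))         # forward: barrier EA
    kb = cb * math.exp(-(EA - dH) / (R * T))  # backward: barrier EA - dH
    return kb / kf                            # = (cb/cf) * exp(dH / (R*T)), Eq. (4.11)

# Two different activation energies, same equilibrium, since EA cancels.
# A catalyst changes EA (and hence the kinetics) but not this ratio:
T, dH = 500.0, -40e3
r1 = ratio_at_equilibrium(EA=60e3, dH=dH, T=T)
r2 = ratio_at_equilibrium(EA=90e3, dH=dH, T=T)
assert abs(r1 - r2) / r1 < 1e-9
```

This is the numerical counterpart of the statement above: E_A governs how fast equilibrium is reached, while only ΔH (more generally, the Boltzmann factor of the energy difference) governs where it lies.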