Thermodynamics Notes PDF

Summary

These notes provide an introduction to thermodynamics, focusing on the fundamental concepts and applying them to biological systems. The document explains the importance of understanding energy exchanges between a system and its environment. It explores the crucial role of thermodynamics in understanding biological processes and phenomena.

Full Transcript


Thermo = heat, and dynamics refers to motion. Thermodynamics is a scientific discipline that was originally developed to better understand steam engines, which use heat to put things into motion. But it turned out that it can be applied much more broadly, even in the life sciences. Thermodynamics is the reasoning about the forces that allow flow of material: about what makes molecules move and react with one another, what makes proteins fold and catalyse reactions, why some molecules bind well and others poorly, how kidney cells can take up salts against concentration gradients, and, in general, how life can organise itself into ever more complex structures, as we shall see. Without “the Force” there is no flow (dynamics), and without flow, there is no Life. It is therefore essential basic knowledge for anyone studying Life Sciences. The force can be described in terms of energies, as was traditionally done in the steam-engine era when the theories were invented. But there is a more intuitive way, I think, which is based on probability and statistics – yes, it will be more intuitive. It is our ambition to provide biomedical students with this intuitive understanding of the molecular forces of nature, rather than let them simply apply equations and formulas they do not understand.

Thermodynamics has a universal name for the object of investigation: the system. In the steam-engine days, the system referred to the engine. In life sciences, the system is often an organism or a cell, or sometimes even a molecule. Here, we will also sometimes define an imaginary system, for example a box containing four particles that bump around vigorously. It is always important to be very aware of what the system is. Another important term is the environment. With environment we mean everything outside the system. Thermodynamics provides us with tools to investigate the exchange of energy between a system and its environment.

Another term that you will encounter in thermodynamics is variables. With variables we mean the parameters that influence the behaviour of the system. A special class of variables are state variables. They define the state in which the system is, irrespective of how it got there. Now that sounds a bit fuzzy. Let's try to get this clear. Consider yourself (the system) sitting on the couch at the end of the day. State variables are, for example, temperature, place and position. How much work you did during that day, for example, is not a state variable. Also how you moved to that couch, whether you walked there or crawled there, and which path you followed, are not state variables. The latter parameters do not describe the state of the system but rather the process that led to that state.

State variables are further divided into two types: intensive and extensive state variables. The distinction between the two types is clearest if you combine two systems: extensive variables add up, whereas intensive variables average out. Consider again yourself sitting on the couch. Someone sits down next to you, in close contact. We will now consider the two of you as one new system. What about your mass? That adds up. So mass is an extensive state variable. What about your temperature? That is still the same, so temperature is an intensive state variable. Volume? Adds up --> extensive. Pressure? Does not add up --> intensive. One can often recognise pairs of state variables consisting of one intensive and one extensive variable.
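To make the combine-two-systems test concrete, here is a minimal sketch (not part of the notes; the numbers are made up) showing that extensive variables of the combined system add up while intensive ones stay the same:

```python
# Minimal sketch (not from the notes): combining two systems at the same temperature.
# Extensive variables (mass, volume) add up; intensive variables (temperature) do not.
person_a = {"mass_kg": 70.0, "volume_L": 65.0, "temperature_K": 310.0}
person_b = {"mass_kg": 60.0, "volume_L": 55.0, "temperature_K": 310.0}

combined = {
    "mass_kg": person_a["mass_kg"] + person_b["mass_kg"],      # 130 kg: extensive, adds up
    "volume_L": person_a["volume_L"] + person_b["volume_L"],   # 120 L: extensive, adds up
    "temperature_K": person_a["temperature_K"],                # still 310 K: intensive
}
print(combined)
```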
In such pairs, a difference in one (intensive) leads to exchange of the other (extensive) to reach equilibrium. For example, if we bring two objects of different temperature into contact, they will exchange heat until both have the same temperature. In this example, temperature can be recognised as the intensive state variable and heat as the extensive one. Another pair is pressure and volume. If you combine two balloons filled with gases, their volumes will add up, so volume is an extensive state variable. Their pressures do not add up, so pressure is an intensive state variable. There will be an exchange of volume (one will grow at the expense of the other) that results in a more equal pressure.

A system has a certain amount of internal energy, symbol U or sometimes E. U has many contributions, and it is often not possible to determine U for a system. Fortunately, that is not a problem, because in thermodynamics we are only interested in changes of energy, ∆U. The Greek capital delta (∆) is generally used to express a change. If the internal energy of a system decreases, for example because a chemical bond is broken, that energy is released and can be used to do work or to produce heat. Nothing else. Let's say our system does volume work: the system uses some internal energy to push and grow. So a decrease of internal energy leads to positive work. The symbol for work is w. In addition, some internal energy is released as heat. Heat (symbol q) flows from the system to the environment. A flux of heat away from the system is indicated as a negative heat flow, whereas a positive q means that heat is put into the system. If we put this into an equation, we get:

∆U = q - w

(check: positive work, U goes down ✓; positive heat flow, U goes up ✓). This is known as the first law of thermodynamics: energy is conserved. It cannot disappear, nor can energy be produced out of thin air. It can only be transformed. From a macroscopic perspective you already know this: potential energy of a ball is converted into kinetic energy once it is released from a height, and the kinetic energy will be converted into heat once the ball lies still on the ground.

But what about energies at the molecular scale? When dealing with molecules, we talk about internal energy, and it comprises many different contributions:
- translation energy: rate of movement in space
- rotation energy: molecules can rotate
- vibration energy: movement of atoms within a molecule
- binding energy: energy in the chemical bonds between atoms (electrons)
- potential energies caused by intermolecular interactions (H-bonds, Van der Waals forces)
- electron energies: energies of electrons within an atom

We see that again we have kinetic and potential energies, and we can think of most of them in the way we think about macroscopic objects. So molecules are balls (atoms) attached via springs that can bend and vibrate. Potential energies arise from “potentials”, forces, in most cases Coulomb-based rather than gravitational.

An example of work done by a system is the expansion of a gas against a constant pressure: the amount of work done by the system is p ∆V (note: when we write p ∆V we mean p times ∆V, short for p·∆V; this is common practice in mathematics) and the change in energy of the system will be:

∆U = - p ∆V

Since the system will do work if ∆V > 0, and p is also positive, the minus sign ensures that the energy content of the system drops if it does volume work.
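As a quick check of this sign convention, here is a minimal sketch (not part of the original notes; the function names are ours) that applies ∆U = q - w with w = p ∆V for expansion against a constant pressure:

```python
# Minimal sketch (not from the notes): first-law bookkeeping with volume work.
# Sign convention as in the text: w is work done BY the system, q is heat INTO the system,
# so dU = q - w and, for expansion against constant pressure, w = p * dV.

def volume_work(p_pa, dV_m3):
    """Work (J) done by the system when it expands by dV against constant pressure p."""
    return p_pa * dV_m3

def dU(q_joule, w_joule):
    """Change in internal energy from heat added to the system and work done by the system."""
    return q_joule - w_joule

# Example: a system expands by 1 litre against atmospheric pressure while absorbing no heat.
w = volume_work(101_325, 1e-3)      # ~101 J of work done by the system
print(dU(q_joule=0.0, w_joule=w))   # negative: the internal energy drops by ~101 J
```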
Thermodynamics was invented for steam engines, and in that realm volume, pressure and heat are the most important quantities. Steam engines work at constant volume (steel barrels) and varying pressure (the puffing sound = release of pressure). However, biology generally works at constant temperature and pressure and (slowly) varying volume (growth). So if we consider, for now, only volume work and heat exchange at constant temperature and pressure, then to keep track of energy we get:

∆U = ∆eU = q - p ∆V (I.3)

where the subscript e is used to emphasize that this is an exchange process: heat is exchanged from one object to another. Since we consider from now on only systems at constant temperature, energy is exchanged between the system and the environment in the form of heat. So we are often interested in this heat, at constant pressure but changing volume, which we call qp, where the subscript indicates that we work under constant pressure p. This heat is (rewriting Eq I.3):

qp = ∆U + p ∆V ≡ ∆H (I.4)

Here ≡ means “by definition”, and ∆H is called the enthalpy change; enthalpy is defined as H = U + pV. As you can see, it is the heat exchanged under constant pressure, and enthalpies are much more convenient than energies, as biology and chemistry most often work under constant pressure and not under constant volume. Enthalpy and energy differences are the same except for the volume work. In many biological processes, volume work is negligible and ∆U ≈ ∆H. However, in some processes volume work cannot be ignored; for example, when gases are produced the effect is quite striking.

As an example of the effect of volume work on heat release, let us look at the fermentation (by yeast cells) of 1 mol of glucose to ethanol and CO2 at constant temperature (298 K):

C6H12O6 (aq) ⟶ 2 CO2 (g) + 2 C2H5OH (aq)

If heat production is measured when the volume is kept constant (and hence, the pressure increases from the CO2), we measure:

∆U = qV = - 54.1 kJ

The subscript “V” indicates “constant volume”. The change in energy between reactants and products is 54.1 kJ mol⁻¹; this will be released as heat because the system is in thermal contact with the surroundings. However, at constant pressure the volume would increase, as it does in rising dough, and part of the energy is used to carry out volume work. At constant pressure, the heat released into the environment is:

qp = - 49.1 kJ

The difference of 5 kJ is caused by the work done under constant pressure: ∆U = q - w. Rewriting:

qp = ∆U + w = ∆U + p ∆V ≡ ∆H = - 54.1 + 5 = - 49.1 kJ

The story so far is as follows: if a biochemical process occurs in a system, most often a chemical reaction, and this process results in an energy difference between reactants and products, then that energy will be released to (or taken up from) the surroundings. We can keep track of this exchange of heat using a balance of enthalpy, as we consider systems at constant pressure – and temperature, of course.

You all know that a gas will expand into a vacuum, or that a drop of ink will spread through a jar of water. You also know that a cold beer becomes warm and hot coffee cold (well, they will both reach the temperature of the room). Have you ever wondered why this is? You may at first think: a pressure difference, or a concentration or temperature gradient provided the force. Okay, but why is pressure a force, what is it that drives energy transfer, and why only if there is a temperature difference? What is the universal driving force behind this movement?
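To see where the roughly 5 kJ of volume work in this example comes from, here is a small sketch (not from the notes), assuming the 2 mol of CO2 produced behave as an ideal gas so that p ∆V ≈ ∆n(gas)·R·T:

```python
# Sketch of the fermentation example, assuming the 2 mol of CO2 behave as an ideal gas,
# so the volume work at constant pressure is p*dV = dn_gas * R * T.
R = 8.314          # gas constant, J mol^-1 K^-1
T = 298.0          # temperature, K
dn_gas = 2.0       # mol of gas produced per mol of glucose fermented

dU = -54.1e3       # measured heat at constant volume, J (q_V = dU)
w_volume = dn_gas * R * T   # ~4.96e3 J of volume work done by the system
dH = dU + w_volume          # q_p = dU + p*dV = dH
print(dH / 1e3)             # ~ -49.1 kJ released at constant pressure
```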
In all cases it is … probability: a state in which molecules or energy quanta are spread around many different places turns out to be (much) more probable than a state in which they are confined to a specific spot. You then only need to find it logical that spontaneous processes move from an improbable state to a more probable state, and that this change is driven by an increase in probability. This is, in fact, the second law of thermodynamics, as we will see.

Let's do some calculations on probabilities for a simple case, to support the basic idea. Suppose we have 4 identical molecules that move in and out of two compartments (see Fig. I.1). Molecules stand still at a temperature of 0 K, but at biologically relevant temperatures (around 300 K) they have huge amounts of kinetic energy and race around at speeds of 500 m s⁻¹ or more. Obviously, in watery environments or even in air they will very quickly bump into each other, and so molecules show random, bumpy walks through their environment, but still very fast. What are, at any time point, the possible states the system can be in, and what is the probability of observing the system in each state? Since we defined two compartments, the different states that the system can be in are 4:0, 3:1, 2:2, 1:3 and 0:4 (Fig. I.1), where 4:0 means 4 molecules are in compartment I and none in compartment II, etc. For defining these so-called “macroscopic states”, it does not matter which of the molecules are where: only the total number in each compartment matters. If we assume equal volumes and other properties of the compartments, then the chance p for an individual molecule to be in compartment I is 0.5, and in compartment II also 0.5 (remember, probabilities always add up to 1). The chance to find the system in state 4:0 is then p(molecule 1 is in compartment I) × p(molecule 2 is in compartment I) × p(molecule 3 is in compartment I) × p(molecule 4 is in compartment I) = 0.5 × 0.5 × 0.5 × 0.5 = (0.5)⁴. There is only one way to achieve this (and obviously also only one way for state 0:4). If we consider state 3:1 (or 1:3), however, there are 4 ways to get to this state: it does not matter which of the 4 molecules is alone. So the probability of finding this state is 4 × (0.5)⁴. We say that there are 4 ways, so-called microscopic states, in which the macroscopic state “3:1” can be achieved. This number is often represented by W and called the multiplicity. So W = 4 for the 3:1 and 1:3 states in the example. The number is W = 6 for the 2:2 case.

It is important to understand the difference between the microscopic and macroscopic perspective. From the microscopic perspective, every molecule is unique and is cruising the spaces it can occupy: every microscopic state, where every molecule is labelled and traced, is equally probable. It is only when we consider the overall, macroscopic state that differences in multiplicity, and thus in probability, arise. Compare it to throwing two dice: the microscopic state (1,1) is just as likely as (3,4) or any other combination (the probability of each being 1/36). The macroscopic state “2”, however, is only achieved by one combination of dice (1,1), whereas the state “7” has a multiplicity of 6 {(1,6),(6,1),(2,5),(5,2),(3,4),(4,3)}. If you throw unbiased dice often enough, you will see that you observe “7” 6 times more often than “2”.
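A small sketch (not part of the notes) that brute-forces the 2⁴ = 16 microstates of the four-molecule example and counts the multiplicity W of each macrostate:

```python
# Sketch (not from the notes): count microstates for the 4-molecule, two-compartment example.
from itertools import product
from collections import Counter

# Each of the 4 labelled molecules is independently in compartment I or II with p = 0.5.
microstates = list(product(["I", "II"], repeat=4))        # 2**4 = 16 equally probable microstates
macro = Counter(state.count("I") for state in microstates)

for n_in_I in sorted(macro, reverse=True):
    W = macro[n_in_I]                  # multiplicity of the macrostate "n_in_I : 4 - n_in_I"
    print(f"{n_in_I}:{4 - n_in_I}  W = {W}  p = {W / 16}")
# Output: 4:0 and 0:4 have W = 1, 3:1 and 1:3 have W = 4, 2:2 has W = 6.
```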
In general, for cases of two options, we can easily compute the probabilities for larger numbers of molecules, and the important lesson to be learned is that the probability of observing (nearly) equal numbers of molecules in both compartments approaches 1 very quickly with an increasing number of molecules (see Box 1.1 for some relevant numbers in biology, and metabolism in particular: they are much larger than 4, in most cases). So if the system is, at time t = 0, in a state 10⁵:0, the system is in a very improbable state: the molecules will quickly move around until they are equally divided, as that is by far the most probable state the system can evolve to (see also Fig. I.2). So the system will show a strong tendency to spread the molecules evenly, and once it has done this, the system is at the most probable macroscopic state and will stay there: there is no other place to go. This most probable state is called equilibrium. From a macroscopic point of view, we should consider the concentration of molecules (the number of molecules per volume), and we are in equilibrium if the concentration of our compound is equal in both compartments. So diffusion is in essence a process caused by the random movement of blind molecules that spontaneously moves towards an increase in probability as differences in concentration decrease, because a homogeneous spread of particles through a space is more probable. Concentration differences can be considered as driving forces for macroscopic, average movement of molecules. I like to call this driving force a “probability drive”. If the concentration gradient is completely dissipated, there is no driving force anymore, and the system is in (diffusive) equilibrium. We will see that this is a general statement, and also for more complex cases, such as biochemical reactions, equilibrium is achieved when there is no thermodynamic driving force anymore.

So what about the expansion of a gas into a vacuum? The same argument holds: if we increase the volume, there are many more ways in which the molecules can be arranged in that space. Think of that space (volume) as a collection of pixels (little boxes), where every pixel can hold a molecule or not. If the number of pixels is equal to the number of molecules n, there is only one way to arrange the molecules, and W = 1. If we just add 1 pixel (so increase the volume), then W = n + 1. So increasing the volume increases the multiplicity, and therefore the state with 1 more pixel is n + 1 times more probable. The probability drive, or force, in this case is better known as a “pressure difference”. If there is a pressure difference between two systems, then changing volumes to decrease the pressure difference will lead to an increase in probability: this will therefore happen spontaneously, in time. Again, if the driving force is zero, we say that the system is in (mechanical) equilibrium.

The measure of probability, or multiplicity, is called entropy (symbol S). It was Boltzmann who linked entropy and probability, through an equation that is engraved on his gravestone (Fig I.3): S = k log W, where k (kB really) is the Boltzmann constant (1.38 × 10⁻²³ J K⁻¹, since you asked) and the log is the natural logarithm (not log₁₀!). So we better write:

S = kB ln W (I.6)

Entropy is therefore a measure of probability. Highly structured systems (such as all molecules localized in one place) have low multiplicity and hence low entropy.
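The same counting can be pushed to larger numbers of molecules. This sketch (not from the notes; the 45–55% window is an arbitrary choice of ours) shows how the probability of a near-even split grows with N, and evaluates S = kB ln W for the even macrostate:

```python
# Sketch (not from the notes): the near-even split becomes overwhelmingly probable as N grows,
# and the entropy of a macrostate follows from S = k_B * ln(W).
from math import comb, log, ceil, floor

kB = 1.380649e-23   # Boltzmann constant, J K^-1

for N in (4, 100, 10_000):
    lo, hi = ceil(0.45 * N), floor(0.55 * N)                       # 45-55% of molecules in compartment I
    p_near_even = sum(comb(N, k) for k in range(lo, hi + 1)) / 2**N
    W_even = comb(N, N // 2)                                       # multiplicity of the exactly even macrostate
    print(N, round(p_near_even, 4), kB * log(W_even))              # probability, entropy in J/K

# For N = 10,000 the 45-55% window already captures essentially all the probability,
# while the 10,000:0 state has W = 1 and probability 2**-10000.
```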
The second law of thermodynamics states that in any spontaneous process the total entropy must increase. What about the transfer of heat (symbol q) that brings your coffee to room temperature? Same story: heat is a flow of energy, and this flow will occur if there is a probability drive, which in this case is defined as a “temperature difference”. There is one important difference between macroscopic objects and molecules: at the molecular level, energy is quantized. It comes in defined packages of fixed energy. We will not try to understand this here, but you have probably had some form of teaching on the atom model with electrons cruising in fixed orbits around the nucleus. These fixed orbits correspond to fixed energy packages. Spectroscopy is based on this: photons of a specific wavelength, and hence energy, can be absorbed by electrons to jump to a higher energy level. It turns out that this quantization applies to the other forms of internal energy as well. The consequences for thermodynamics are important, as we can now understand why energy flows from an object of high temperature to an object of low temperature: it again increases multiplicity and thus entropy (see figure I.4). To understand the figure, you need to realize that if we combine two systems A and B, the multiplicity of the total system is Wtot = WA × WB.

Temperature (T) is actually defined formally based on the change of entropy upon a change in energy. A temperature difference is therefore again a probability drive: it indicates how much entropy can be gained if energy (in the form of heat) is exchanged. The way in which we will see this in the formulas that follow is:

∆eS = q / T (I.7)

where the subscript e is used to emphasize that this is an exchange process: heat is exchanged from one object to another.

So far so good: the second law of thermodynamics tells us that in an isolated system, spontaneous molecular events will proceed, pushed by thermodynamic driving forces, until we reach equilibrium: the most probable state, where the probability drive is zero (maximal entropy). The problem for us is that biological systems are not isolated: in humans, molecules live at a constant temperature (37 °C or 310 K), and so the heat of any chemical reaction will be released to the environment – otherwise the temperature of the human body would rise. Another way of saying this: the body is in thermal contact with the environment. But heat exchange has an effect on entropy, which is actually negative for the object that provides the energy (see Fig I.4 on the previous page); the only thing we know for certain, from the second law, is that upon exchange of heat, the sum of the entropies of the biological system and the environment must increase! So, if there is exchange of heat – because we are in thermal contact to keep the system at constant T – there is exchange of entropy, and we cannot be certain anymore that the system will evolve to its most probable state: only the system plus the environment will:

Isolated system: equilibrium when Ssys is maximal
System at constant T: equilibrium when Ssys + Senv is maximal

So what is the state that the system will reach in equilibrium, then? For this, we need to track all the changes and exchanges of entropies, to make sure the second condition holds. To do so, we need some techniques and concepts that will now be explained. First, we need balances: if you want to track whether a certain activity was profitable, you need to keep track of all costs and incomes.
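A minimal sketch (not from the notes) of why heat flows from hot to cold in these entropy terms: the hot object loses entropy q/T(hot), the cold one gains q/T(cold), and the sum is positive whenever the temperatures differ in the right direction.

```python
# Sketch (not from the notes): entropy bookkeeping for heat transfer, using dS = q/T.
def total_entropy_change(q, T_hot, T_cold):
    """Entropy change (J/K) when heat q (J) flows from an object at T_hot to one at T_cold."""
    return -q / T_hot + q / T_cold   # hot object loses entropy, cold object gains more

print(total_entropy_change(q=100.0, T_hot=330.0, T_cold=295.0))   # > 0: spontaneous
print(total_entropy_change(q=100.0, T_hot=295.0, T_cold=295.0))   # = 0: no temperature difference, no drive
```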
So we are going to do some bookkeeping. It is useful to define two types of processes on the balance: internal changes and exchanges. Remember extensive variables: these were the ones you can count and add up. For those we can make balances:

∆X = ∆eX + ∆iX (I.8)

This balance states that some quantity X (an amount of euros, the number of glucose molecules in the blood, energy) can change because X is exchanged (∆eX), i.e. imported or exported, or because X is being produced or consumed within the system (∆iX). Both ∆eX and ∆iX can contain more than one term, where each term represents a different, independent process. If we apply this to the system "blood", then we have red blood cells (erythrocytes) inside the system that consume glucose, there are processes that take glucose up from the blood (e.g. muscle, brain), and there are processes that put glucose into the system (liver, intestine). In this case the balance for the number of glucose molecules is:

∆glucose = ∆eglucose (liver, intestine, > 0) + ∆eglucose (muscle, brain, < 0) + ∆iglucose (erythrocytes, < 0) (I.12)

In other words: a closed system will always develop towards equilibrium, because that is the state with the highest entropy. But how about a biological system? How about a yeast cell, or an animal? Let's consider a single cell. The cell is highly ordered, which is certainly not a highly probable state with maximal entropy. How can such an ordered cell develop and maintain itself? The crux is that the cell (or any biological system) is not an isolated system: it is not closed to gases, and certainly not to heat exchange. Inside that cell, many combustion reactions have taken place, and these have generated a lot of energy that was exported to the environment. This is export of entropy (remember: ∆eS = q/T = ∆H/T). So, in contrast to energy, entropy can be produced, and entropy can be exported, and we need to make a balance to see if the total entropy in the universe has increased:

∆Ssys + ∆Senv = ∆Ssys - ∆Hsys/T > 0 (I.13)

Multiplying by T gives:

T∆Ssys - ∆Hsys > 0 (I.14)

The magic is that we can make a statement about the entropy increase of the universe by only looking at the system! We make a balance and score the exchanges of heat (and later, molecules) and we can decide if a process can occur or not. For historical and steam-engine reasons, the famous scientist J.W. Gibbs (Fig 3.1) defined what is now called Gibbs energy as G ≡ H - TS, and hence we can write, for a spontaneous process:

∆Gsys = ∆Hsys - T∆Ssys < 0 (I.15)

You have to realise that this Gibbs energy equation is best seen NOT as an energy balance, but as a balance for entropy: compare equations I.15 and I.13! Equation I.15 implies that, if a spontaneous process takes place in a system, Gibbs energy is lost. This would make little sense if Gibbs energy were really an energy (energy is conserved, first law of thermodynamics, remember?). If you understand that entropy, the measure of probability, is produced from nowhere during a spontaneous process driven by the probability drive, then you can understand that the negative of that, Gibbs energy, can be lost during a spontaneous process. In open systems, processes are driven by the loss – we call this dissipation – of Gibbs energy. It is the same old probability drive, but now taking the exchange of energy with the environment into account. There is also an interpretation of Gibbs energy as an energy (see Box I.2), in which Gibbs energy is the maximal amount of work that can be extracted from a process. This is also how Marks explains it.
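A small sketch (not from the notes; the example values are made up) showing that the Gibbs energy criterion and the total-entropy criterion are the same statement, since ∆Gsys = -T (∆Ssys + ∆Senv):

```python
# Sketch (not from the notes): the Gibbs energy criterion as an entropy balance.
# dS_total = dS_sys + dS_env = dS_sys - dH_sys/T, and dG_sys = dH_sys - T*dS_sys = -T*dS_total,
# so dG_sys < 0 is exactly the same condition as dS_total > 0.
def dG(dH_sys, dS_sys, T):
    """Gibbs energy change of the system (J) at constant T (K) and pressure."""
    return dH_sys - T * dS_sys

def dS_total(dH_sys, dS_sys, T):
    """Entropy change of system plus environment (J/K) for the same process."""
    return dS_sys - dH_sys / T

# Hypothetical values: an exothermic process that also increases the system's entropy.
dH, dS, T = -30e3, +50.0, 310.0
print(dG(dH, dS, T))             # negative -> spontaneous
print(-T * dS_total(dH, dS, T))  # identical number: dG = -T * dS_total
```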
It is my experience that students struggle with the reason why this must be so, and I hope the probabilistic view is more intuitive.

Box I.2 Gibbs energy as an energy
We stress here the notion of Gibbs energy as a measure of entropy. Gibbs, however, defined it with the notion of energy in mind. Gibbs energy is often called free energy, which means “free to do work”. If Gibbs free energy is lost during a spontaneous process, what is lost is the ability to do work (see figure). In situation A1 there is a force that allows work to be done. In situation A2, some of that work has been done (FB·h), and some of the potential energy has been used to drive the process and is now “lost” as heat. In situation B1, there is an entropy-driven driving force (the chemical potential μ) that allows chemical work to be done. In B2 that driving force is zero, as the concentrations in both compartments are equal, and the system is in equilibrium. Entropy is produced, Gibbs energy is lost, i.e. the ability to do work. The Gibbs energy difference between two states A and B is the maximal amount of work that can be done if the system moves from A to B. The maximum is only achieved if one works at (almost) zero rate. In case A1, if block B had the same mass as block A, no energy would be lost as heat, but the process would be infinitely slow.

Since a spontaneous process is driven by Gibbs-energy dissipation, the system will be in equilibrium if there is no such force anymore (Box I.2). The system then is at its minimal Gibbs energy. Where isolated systems strive towards maximal entropy, biochemical open systems strive towards minimal Gibbs energy.

Why does ice melt above 0 ˚C and not below it? Here you can see thermodynamics in action. Let's consider the melting reaction ice -> water. For this reaction to happen, ∆G must be negative:

∆G = ∆H - T∆S (I.15)

Melting ice requires input of heat: it absorbs heat from the environment. ∆H > 0, so it works against melting (it increases ∆G). Melting increases the entropy: in ice, all molecules are at a fixed position, so W = 1; in liquid water molecules can move around, so W is much larger. ∆S > 0, so it favours melting (it decreases ∆G). So who wins, ∆H or ∆S? That depends on the temperature! In the equation for ∆G you can see that the entropy term, T∆S, depends on temperature whereas ∆H does not. So at low temperature ∆H > T∆S and ice remains solid. At higher temperature ∆H < T∆S and ice melts. At T = 273 K (0 ˚C), ∆H = T∆S and ∆G = 0. Nothing happens; our system is at equilibrium. This is in fact a universal rule: at equilibrium ∆G = 0.

Many processes are temperature dependent. The table below shows which processes happen at low or at high temperature. Remember: ∆G = ∆H - T∆S must be negative for the reaction to happen.

∆H < 0, ∆S > 0: ∆G < 0 at any temperature; the process is always spontaneous.
∆H < 0, ∆S < 0: ∆G < 0 only at low temperature (T < ∆H/∆S).
∆H > 0, ∆S > 0: ∆G < 0 only at high temperature (T > ∆H/∆S).
∆H > 0, ∆S < 0: ∆G > 0 at any temperature; the process never happens spontaneously.
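To see the ∆H versus T∆S trade-off numerically, here is a small sketch (not from the notes) using approximate textbook values for the molar enthalpy and entropy of fusion of water; the 50 J/mol tolerance for "equilibrium" is an arbitrary choice of ours:

```python
# Sketch (not from the notes): dG = dH - T*dS for melting ice, with approximate textbook
# values dH_fus ~ 6.01 kJ/mol and dS_fus ~ 22.0 J/(mol K) for water.
dH_fus = 6.01e3    # J/mol, heat needed to melt one mole of ice (dH > 0, opposes melting)
dS_fus = 22.0      # J/(mol K), entropy gained on melting (dS > 0, favours melting)

for T in (263.0, 273.0, 283.0):          # -10 C, ~0 C, +10 C
    dG = dH_fus - T * dS_fus
    if abs(dG) < 50:                     # within ~50 J/mol of zero: effectively at equilibrium
        verdict = "equilibrium"
    elif dG < 0:
        verdict = "melts"
    else:
        verdict = "stays solid"
    print(T, round(dG), verdict)
# dG changes sign around T = dH/dS ~ 273 K: below it ice is stable, above it ice melts.
```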
