Turtles at the Bottom 🐢

Andrew August's Notebook

Notes on Thermodynamics

Thermodynamics is a framework for modelling the macroscopic properties of matter. It’s a coarse-grained approach that allows us to describe systems in terms of a small number of observables, despite there being an enormous number of degrees of freedom at the microscopic level.

When is Thermodynamics?

As the saying goes, “all models are wrong but some are useful”. Thermodynamics is no exception; it’s only accurate under the following conditions:

- The system contains an enormous number of degrees of freedom, so that fluctuations in the macroscopic observables are negligible.
- The system is in (or very close to) equilibrium, so that those observables are well defined and not changing in time.

States

When the conditions listed above are true, we can characterize the system using a small number of state variables: pressure \(P\), volume \(V\), temperature \(T\), internal energy \(U\), and entropy \(S\) (with the particle number \(N\) held fixed).

Although this looks like a 5-dimensional state space, in practice these variables are related to each other in a way that reduces the overall dimensionality. For example, an ideal gas with fixed \(N\) has

\[PV = NkT\]

or without constants:

\[\frac{PV}{T} = \text{const}\]

Thus, an ideal gas’s state is really only two dimensional. For example, to specify the full state all we need to know is \((P,V)\) or \((S,U)\) or \((T,V)\), etc. But note that not all pairs of variables work: for an ideal gas, \((T,U)\) is redundant, since \(U\) depends only on \(T\), so the pair doesn’t determine \(V\).
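
To make the dimension counting concrete, here’s a minimal Python sketch (assuming a monatomic ideal gas, so \(f = 3\), and using the ideal-gas internal energy \(U = \frac{f}{2}NkT\) that appears later in these notes): given just \((T, V)\) at fixed \(N\), the other variables follow.

```python
k = 1.380649e-23  # Boltzmann constant, J/K

def ideal_gas_state(T, V, N, f=3):
    """Recover the remaining state variables of an ideal gas from (T, V)
    at fixed N. f = 3 assumes a monatomic gas (translational motion only)."""
    P = N * k * T / V          # equation of state
    U = 0.5 * f * N * k * T    # internal energy (see the Energy section)
    return P, U

# One mole at room temperature in one litre:
print(ideal_gas_state(300.0, 1.0e-3, 6.022e23))  # ~2.5e6 Pa, ~3.7e3 J
```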

Energy

The energy of a thermal system is defined by its internal energy \(U\), which is simply the sum of the kinetic and potential energies of all of its particles.

To calculate \(U\) we could in principle sum each particle’s energy, but thermodynamics is about analyzing large systems using a small number of macroscopic variables, ignoring the underlying microstates. When thermodynamics was developed no one actually knew what matter was made of, so such a sum wasn’t an option; the purely thermodynamic characterization of \(U\) existed nonetheless, and that’s what we use here. In particular, the first law of thermodynamics describes energy flow: internal energy changes through just two mechanisms, heat \(Q\) and work \(W\):

\[\Delta U = Q + W\]
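
As a quick sign check with this convention, where \(W\) is work done on the system: if 100 J of heat flows into a system while it does 40 J of work on its surroundings, then \(Q = +100\) J, \(W = -40\) J, and \(\Delta U = +60\) J.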

Work

Work is defined as energy transferred by a force acting through a displacement. “Displacement” usually refers to a change in the spatial coordinate of a particle, but in thermodynamics it’s defined more generally as a change in an extensive variable. An extensive variable is any quantity that scales with the size or amount of matter in a system, such as volume or mass. This is in contrast to intensive variables, such as temperature and pressure, which do not change when the system is, for example, divided in half.

There are many different types of work. For example, if a system contains charges in a potential \(\phi\), we can “displace” charge by adding an amount \(dq\). The work done in this case is

\[dW_{\text{electrical}} = \phi\ dq\]

Similarly, if we displace a system’s volume \(V\) against a pressure \(P\) then the work done is

\[dW_{\text{mechanical}} = -P\ dV\]

The negative sign ensures that compressing the system (decreasing its volume) does positive work on it and therefore increases its internal energy. Going forward, I’ll focus on mechanical work and refer to it simply as \(W\). Let’s look at two common processes involving work.

Isothermal processes are those where temperature is held constant. In practice, this is achieved by placing the system in contact with a constant-temperature reservoir and changing the system so slowly that its temperature remains equal to the reservoir’s at all times.

How much does the system’s internal energy change in such a process? For an ideal gas, \(U = U(T)\), so its internal energy doesn’t change at all. So where does the work go? It’s exchanged with the reservoir as heat: work done compressing the gas flows out as heat, and work the gas does while expanding is supplied by heat flowing in. To find out how much, we use the first law:

\[\begin{align} Q &= -W \\ &= \int P\ dV \\ &= NkT \int_{V_i}^{V_f} \frac{1}{V}\ dV \\ &= NkT \ln \frac{V_f}{V_i} \end{align}\]
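
As a sanity check on that formula, here’s a small numeric sketch (the mole-sized \(N\), temperature, and volumes are just made-up inputs):

```python
from math import log

k = 1.380649e-23  # Boltzmann constant, J/K

def isothermal_heat(N, T, V_i, V_f):
    """Heat absorbed by an ideal gas held at temperature T while its volume
    changes from V_i to V_f (equal to minus the work done on the gas)."""
    return N * k * T * log(V_f / V_i)

# One mole doubling its volume at 300 K:
print(isothermal_heat(6.022e23, 300.0, 1.0e-3, 2.0e-3))  # ~ +1.7e3 J absorbed
```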

The other type of process is called adiabatic. It’s one where no heat enters or leaves the system. In practice, this is achieved by thermally insulating the system from its environment. How does internal energy change in this case? From the first law,

\[\Delta U = W\]

We can’t calculate \(W\) like for the isotherm because \(T\) isn’t constant. Instead, we have to use an explicit formula for \(U\). For an ideal gas we have

\[\begin{align} dU &= dW \\ \rightarrow \frac{f}{2}Nk\ dT &= -P\ dV \\ \end{align}\]

Inserting the ideal gas equation of state \(P = NkT/V\) and separating variables gives

\[\frac{f}{2}\frac{dT}{T} = -\frac{dV}{V}\]

which integrates to

\[VT^{f/2} = \text{const}\]

which, using \(T = PV/(Nk)\), can be rewritten as

\[PV^{(f+2)/f} = \text{const}\]

Now, given \((T_i, V_i, V_f)\) we can solve for \(T_f\) and use it to calculate the change in internal energy as \(U(T_f) - U(T_i)\).
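
Here’s that recipe as a short sketch (again for an ideal gas; the numbers are arbitrary and \(f = 3\) assumes a monatomic gas):

```python
k = 1.380649e-23  # Boltzmann constant, J/K

def adiabatic_delta_U(N, f, T_i, V_i, V_f):
    """Final temperature and change in internal energy of an ideal gas with
    f degrees of freedom per particle, compressed or expanded adiabatically."""
    T_f = T_i * (V_i / V_f) ** (2.0 / f)   # from V T^{f/2} = const
    dU = 0.5 * f * N * k * (T_f - T_i)     # U(T_f) - U(T_i)
    return T_f, dU

# One mole of a monatomic gas adiabatically compressed to half its volume:
T_f, dU = adiabatic_delta_U(6.022e23, 3, 300.0, 2.0e-3, 1.0e-3)
print(T_f, dU)  # ~476 K, ~ +2.2e3 J: compression heats the gas
```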

Heat

Compared to work, heat is much more passive. It’s defined as energy transfer due to a temperature gradient between a system and its environment.

A note on terminology: systems don’t “have” heat. Heat is a mechanism of energy transfer, not a property of a system. In common language we say objects “are hot”, but really we mean they’re at a higher temperature than something else and can therefore transfer energy to it via heat. “Hot” objects don’t physically contain something called heat.

How much heat must be transferred to change a system’s temperature by a given amount depends on what the system is made of, and this is measured by a quantity called heat capacity:

\[C = \frac{\partial Q}{\partial T}\]

Heat capacity is defined as the amount of heat needed to raise a system’s temperature by 1K. Or, as I like to think of it, the energy gained/lost when the system equilibrates with a reservoir that was originally 1K hotter/cooler than it.

Heat capacity depends on the constraints of the process, specifically whether it occurs under constant volume or constant pressure conditions. Under constant volume no work is done, so

\[\begin{align} C_V &= \left( \frac{\partial Q}{\partial T} \right)_V \\ &= \left( \frac{\partial U}{\partial T} \right)_V \\ \end{align}\]

Under constant pressure, volume does change, so

\[\begin{align} C_P &= \left( \frac{\partial Q}{\partial T} \right)_P \\ &= \left( \frac{\partial (U -W)}{\partial T} \right)_P \\ &= \left( \frac{\partial U}{\partial T} \right)_P + P\left( \frac{\partial V}{\partial T} \right)_P \\ \end{align}\]

For solids and liquids, volume doesn’t change much with temperature, so the second term in \(C_P\) can be ignored, making \(C_V \approx C_P\). For gases, however, volume changes significantly with temperature. For an ideal gas,

\[U = \frac{f}{2}NkT\]

so

\[C_V = \frac{f}{2}Nk\]

and

\[C_P = \frac{f}{2}Nk + Nk\]
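
Side by side, with \(f = 3\) for a monatomic gas and \(f = 5\) for a diatomic gas at ordinary temperatures (the usual textbook values):

```python
k = 1.380649e-23  # Boltzmann constant, J/K

def heat_capacities(N, f):
    """C_V and C_P of an ideal gas with f degrees of freedom per particle."""
    C_V = 0.5 * f * N * k
    C_P = C_V + N * k
    return C_V, C_P

N = 6.022e23  # one mole
for f in (3, 5):
    C_V, C_P = heat_capacities(N, f)
    print(f"f={f}: C_V={C_V:.1f} J/K, C_P={C_P:.1f} J/K, C_P/C_V={C_P/C_V:.2f}")
# f=3: C_V=12.5 J/K, C_P=20.8 J/K, ratio 1.67
# f=5: C_V=20.8 J/K, C_P=29.1 J/K, ratio 1.40
```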

The last thing I’ll note about heat is that \(C\) is defined for a given phase: solid, liquid or gas. If a phase transition occurs during heating, the heat goes into breaking intermolecular bonds (the latent heat) and the temperature stays constant until the transition is complete.

Heat Engines

Heat engines are systems that convert heat into work.

Potentials

\[\begin{align} df &= \nabla f \cdot d\mathbf{r} \\ \Rightarrow \int_{\mathbf{r}_1}^{\mathbf{r}_2} df &= \int_{\mathbf{r}_1}^{\mathbf{r}_2} \nabla f \cdot d\mathbf{r} \end{align}\]

Changes in the energy of a thermodynamic system are captured by the fundamental thermodynamic relation:

\[dU = T\ dS + \sum_i X_i\ dY_i\]

Here, \(X_i\) is a generalized force and \(Y_i\) is the corresponding generalized displacement; together they account for one type of work. Here are a few examples of force-displacement pairs, with a combined example after the table:

| Type of work | Generalized Force \(X_i\) | Generalized Displacement \(Y_i\) |
|---|---|---|
| Mechanical | \(-P\) | \(V\) |
| Surface tension | \(\gamma\) | \(A\) |
| Electrical | \(\Phi\) | \(Q\) |
| Chemical | \(\mu\) | \(N\) |
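
For example, reading the mechanical, electrical, and chemical rows off the table together, a gas that can also exchange charge and particles with its surroundings has

\[dU = T\ dS - P\ dV + \Phi\ dQ + \mu\ dN\]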

For the rest of this discussion I’ll only consider systems that change volume, i.e. those involving mechanical work, but the results are easy to generalize.

The fundamental thermodynamic relation assuming only mechanical work becomes

\[dU = T\ dS - P\ dV\]

Path Dependence

“Doing” thermodynamics generally means calculating how the temperature, pressure, volume or energy of systems change when they interact with each other. For example, if a gas is heated, what happens to its volume? Or if a liquid evaporates, how much heat does it have to absorb?

To answer questions like these, it’s sometimes enough to know only the initial and final state of the system, like its initial and final temperature. Other times, however, it’s necessary to know the path the system follows through state space from its initial state to its final state. For example, consider the amount of work done on an ideal gas:

\[W = -\int P\ dV\]

Imagine the gas transitions from state \((P_1, V_1)\) to state \((P_2, V_2)\) via two different paths: path A changes the volume first, at constant pressure \(P_1\), then drops the pressure at constant volume \(V_2\); path B drops the pressure first, at constant volume \(V_1\), then changes the volume at constant pressure \(P_2\). Since no work is done on the constant-volume legs, we have

\[\begin{align} W_A &= -P_1\ (V_2 - V_1) \\ W_B &= -P_2\ (V_2 - V_1) \end{align}\]

The two works are different, which demonstrates that work is path dependent.
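
A quick numeric illustration (the pressures and volumes are arbitrary made-up values):

```python
# Arbitrary endpoints, just to make the path dependence concrete.
P1, P2 = 2.0e5, 1.0e5      # Pa
V1, V2 = 1.0e-3, 3.0e-3    # m^3

# Path A: change volume at constant P1, then drop pressure at constant V2.
# Path B: drop pressure at constant V1, then change volume at constant P2.
# (No work is done on the constant-volume legs.)
W_A = -P1 * (V2 - V1)
W_B = -P2 * (V2 - V1)

print(W_A, W_B)  # -400.0 J vs -200.0 J: same endpoints, different work
```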

As an example of path independence, consider the internal energy of an ideal gas. The internal energy is given by \(U \propto T\), so the change in internal energy during any process only depends on initial and final states, namely temperature.

Can we determine whether a quantity is path dependent without explicitly computing it along different paths and comparing? Yes, using a mathematical tool called exact differentials. The differential expression

\[F(x,y)\ dx + G(x,y)\ dy\]

is called an exact differential if there exists a function \(f(x,y)\) such that

\[F = \frac{\partial f}{\partial x} \ \text{ and }\ G = \frac{\partial f}{\partial y}\]

For example, the expression \(y\ dx + x\ dy\) is an exact differential because it’s the total differential of \(f = xy\). It turns out that exact differentials are path independent, so we can calculate the total change in \(df\) using just the endpoints:

\[\int_{\mathbf{r}_1}^{\mathbf{r}_2} df = f(\mathbf{r}_2) - f(\mathbf{r}_1)\]

On the other hand \(y\ dx\), for example, is an inexact differential because there’s no function \(f\) such that \(df = y\ dx\). Note that I’ve written these differentials in terms of two variables \(x\) and \(y\), but in general they can depend on any number of scalar variables.
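
There is also a standard mechanical test (for smooth functions on a simply connected region): \(F\ dx + G\ dy\) is exact exactly when the mixed partials agree, \(\partial F/\partial y = \partial G/\partial x\). A quick check of the two examples above using sympy:

```python
import sympy as sp

x, y = sp.symbols("x y")

def is_exact(F, G):
    """Test whether F dx + G dy is an exact differential by comparing
    the mixed partial derivatives dF/dy and dG/dx."""
    return sp.simplify(sp.diff(F, y) - sp.diff(G, x)) == 0

print(is_exact(y, x))              # True:  y dx + x dy = d(xy)
print(is_exact(y, sp.Integer(0)))  # False: y dx is inexact
```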

Thus, in thermodynamics, if a quantity is an exact differential then we can compute its change by simply looking at the endpoints; otherwise we have to be more careful and consider exactly how the change occurs. Exact differentials will be denoted by \(df\), while inexact differentials will be denoted by \(\delta f\) to remind us that path matters.

Reversibility

Reversibility is an important idealization in thermodynamics in much the same way that “frictionless” is an important idealization in mechanics: it removes all dissipative effects so we can understand the underlying phenomenon before accounting for real-world losses.

Reversible processes are quasistatic and frictionless, where quasistatic means that change is so slow the system and environment are arbitrarily close to equilibrium at all times. Of course in the limit of true reversibility the process is so slow that it doesn’t happen, which is what makes this an idealization. As we’ll see, if a system and environment can be returned to their initial state without producing entropy then the process is reversible.