The Laws of Thermodynamics [Podcast Scripts]

The episodes on the laws of thermodynamics were first released in November 2018.

First Law
Second Law
Third Law

You can’t win

Hello and welcome to this episode of Physical Attraction! We are, and defiantly remain, the show that explains ideas in physics, one chat-up line at a time: and today, we’re going to be talking about the Laws of Thermodynamics. Thermo- is a prefix that means heat and temperature, and dynamics is the study of how things move, whether forced or unforced. So it’s about the flow of heat, but there are profound consequences for how we think about ideas like energy, too.

But first: it’s this episode’s physics-based chat-up line. Because these ones are awful, even by my usual standards, I’m going to give you a few to choose from.

“You must store a lot of your energy internally, because you’re pretty damn hot.”
“Let’s convert our chemical potential energy to sound and heat… if you know what I mean.”
“They say that perpetual motion is impossible, but we’ll give it a go…”

There you go — feel free to deploy those at your leisure, and feel the sweet, stinging heat of tears rolling down your face as you’re rejected once again.

The laws of thermodynamics are basically statements about how energy behaves, and effectively about what heat and temperature are. They’re incredibly useful, because they describe natural processes — but also allow us to calculate upper theoretical limits. Most systems that we know of ruthlessly obey the laws of thermodynamics: such that, whenever anyone tries to tell you that they’ve created a machine that violates them, you usually know they’re wrong. The original descriptions of some of the laws of thermodynamics were in terms of heat engines, and the field of thermodynamics is incredibly important in terms of engineering the world around us. But the laws can also be applied to systems on the largest possible scale — including the Universe itself.

I’ve named this series of episodes after a mnemonic for the laws of thermodynamics: ‘you can’t win, you can’t break even, and you can’t get out of the game.’ Makes it sound like you’re playing a pretty unfair, loaded game, but we’re all used to that, right? This corresponds to the First, Second, and Third laws — but actually, there’s also a Zeroth law that we need to cover briefly. It was kind of added on afterwards, which is why the numbering system is so messed up, but it’s still important.

So the zeroth law of thermodynamics basically defines what temperature is. It’s what you measure with a thermometer! We all have an intuitive idea, I think, that temperature is kind of — a measure of hotness — and, microscopically, you can talk about temperature as being how much energy the molecules that make up a substance have. If the substance has low internal energy, then the molecules won’t vibrate very much; if it has high internal energy, they’ll jiggle about like mad. But this isn’t a universal definition of temperature.

So the zeroth law says that if two thermodynamic systems are each in thermal equilibrium with a third, then they are in thermal equilibrium with each other.

So, wait a minute, back up — what’s a system? What’s thermal equilibrium? And why does this matter?

In thermodynamics, a ‘system’ is a region of space that you can describe by specifying a certain number of physical quantities. So maybe it’s the air in a jar, or maybe it’s the Universe itself. The idea is that you can say: this region of space has a certain temperature, a certain pressure, a certain amount of internal energy, and it occupies a certain volume. We call these ‘state variables’; they basically tell you what state your system is in. There are walls (real or imaginary) around the system.

An open system lets matter — stuff — flow in and out. Maybe a bucket with an open top. A closed system doesn’t let matter flow in and out, but heat can flow in and out of the boundaries; maybe a bottle of water is a good approximation. And an isolated system doesn’t let heat or matter flow in or out — kinda how your thermos coffee flask is supposed to work.

When two systems are connected, but in thermal equilibrium, it means that heat — energy — isn’t flowing between them. Imagine taking a jar of cold air out of a freezer. Now the jar system is in contact with the system of the room that you’re in. Heat will flow into the jar from outside, obeying the laws of thermodynamics. Eventually, they come to equilibrium — balance — after the cold air has heated up to the temperature of the room. If you leave it long enough, they will be at the same temperature — and then, no heat is flowing. Heat *could* still flow — the jar and room are still connected — but because they’re at the same temperature, no heat does flow between them. They are now in thermal equilibrium.

So this is why the zeroth law defines temperature. It says that, regardless of what your substances are made of, regardless of what they are — if you have two systems each in equilibrium with a third, they will also be in equilibrium with each other. Which makes sense: if you have two objects in the same room, at the same temperature, you wouldn’t suddenly expect them to transfer heat energy to each other when you put them together. So we can say: if two things are at the same temperature, no heat flows between them. That’s the zeroth law.

And this also means that thermometers work. When you put a thermometer in your hand, or use it to take someone’s temperature, you wait for the thermometer to come into thermal equilibrium with the thing you’re measuring. The fact that thermal equilibrium exists lets you measure temperature.

Just a quick aside about physical measurements of temperature. As so often in science, we’re lumbered with units and terms that exist for historical reasons, but don’t really make sense any more. I’m sorry, Americans, but Fahrenheit is a terrible scale. It was based on nothing particularly special. 0 degrees Fahrenheit was the coldest temperature Mr Fahrenheit could reliably produce, using a freezing mixture of ice, water and salt (by some accounts, it also matched the coldest winter air temperature in his home town of Danzig). 96 degrees Fahrenheit was roughly his own body temperature. He chose 96 so he could make a nice number of marks on his thermometer. These units are very idiosyncratic — they make no sense. People say that Fahrenheit is better for weather because 0–100F is kinda the range of temperatures on Earth, but this is just an excuse. It really means “we’re used to it and we can’t be bothered to change now” — but anyone who’s used Celsius for a while will be able to tell you what 30C, 20C, 10C, 0C feel like. No one is so sensitive that they need a more precise scale and can tell the difference between, say, 19C and 19.5C.

Celsius is a little bit better, because it’s based on reference points we can all agree on. Water is pretty common on Earth. It freezes at 0 and it boils at 100. The scale makes sense, and is based on physics. It’s time to join the rest of the world and use a scale that makes sense. It doesn’t have to be difficult. Or accept that you’re just sticking with it out of stubbornness, and we’ll leave you alone.

Kelvin is the best, though. When the first scales were invented, people didn’t realize that there was a minimum temperature. The Kelvin scale takes this into account, and starts at absolute zero. This means you can directly relate it to the energy you’re talking about. Whenever physicists do calculations, they use Kelvin: and they only convert to Celsius or Fahrenheit to talk about the weather, or impress non-scientists. Rant over.
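For the show notes, here’s the whole rant condensed into a few lines of Python. The conversion formulas are the standard ones; the function names are just mine:

```python
# Conversions between the three temperature scales discussed above.
# Standard formulas: K = C + 273.15, F = C * 9/5 + 32.

def celsius_to_kelvin(c):
    return c + 273.15

def fahrenheit_to_celsius(f):
    return (f - 32) * 5 / 9

def fahrenheit_to_kelvin(f):
    return celsius_to_kelvin(fahrenheit_to_celsius(f))

print(celsius_to_kelvin(0))       # 273.15 K: water freezes
print(celsius_to_kelvin(100))     # 373.15 K: water boils
print(fahrenheit_to_celsius(0))   # ~-17.8 C: Fahrenheit's freezing-mixture zero
print(fahrenheit_to_kelvin(96))   # ~308.7 K: his "body temperature" mark
```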

What’s happening on the microscopic scale when things come into thermal equilibrium with each other? Imagine a box filled with air. In the box, air molecules are bouncing around. They travel at different speeds, collide with the walls and each other, and when they collide, they exchange energy. These collisions with the walls are what make the gas exert pressure: the molecules are constantly pushing on the walls, which push back. Some of the gas molecules will get lucky, and they’ll have lots of collisions that add to their momentum, so they’ll be travelling quickly. Others will be travelling very slowly until they get kicked by a faster molecule. So you have a distribution of speeds. But you can define a certain average energy that the molecules have — some might be faster, some will be slower, but there will be an overall average. This average is related to the temperature. If you heat the gas from the outside, maybe by putting the box on a stove, this average energy goes up. The temperature of the gas has increased, and so the molecules are moving faster.
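For the show notes: a toy numerical check of this picture, assuming an ideal gas of nitrogen-like molecules with velocities drawn from the standard thermal (Maxwell–Boltzmann) distribution. The point is that “temperature” falls straight out of the average jiggling energy:

```python
# Toy check: in an ideal gas, each velocity component is normally distributed
# with variance kT/m, so the average kinetic energy per molecule is (3/2) k T.
import numpy as np

k = 1.380649e-23   # Boltzmann constant, J/K
m = 4.65e-26       # approximate mass of a nitrogen molecule, kg
T = 300.0          # room temperature, K

rng = np.random.default_rng(0)
v = rng.normal(0.0, np.sqrt(k * T / m), size=(1_000_000, 3))  # velocities, m/s

mean_ke = 0.5 * m * (v ** 2).sum(axis=1).mean()
print(mean_ke / (1.5 * k))   # recovers ~300 K: temperature is average jiggle
```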

Now imagine we inject a little bit of hotter gas into the gas in the box. The hotter gas molecules are moving around faster than the rest of the gas in the box. And these hot molecules will bash into the colder molecules, transferring energy. The way collisions work is that energy is usually transferred from the fast-moving to the slow-moving object — we know this from the world around us. Initially, this system is not in thermal equilibrium, because there’s a section of gas that is at a different temperature to the rest. That hot patch exerts more pressure on the walls near it, because its molecules are hitting the walls faster and more often. Because it’s not in equilibrium, we can’t tell you the “state” of the system — it doesn’t just have one temperature, or one pressure.

But, very soon, these new molecules will collide with the old ones, and give up some of their energy. The temperature of the gas overall will slightly increase, as this new energy spreads around the whole population. It’s a little bit like how trickle-down economics is supposed to work — a few rich people show up and gradually spread their wealth until everyone has some. Soon, the system will be in equilibrium again, at a new temperature and pressure.

Do collisions stop? No, the molecules are still colliding. Sometimes some energy will go to the newer molecules, sometimes it will flow to the older ones — but the system as a whole is still in equilibrium. The energies are all mixed up. If you put a thermometer in the gas, it would settle quickly to a specific temperature. It’s the same when two bigger systems are in equilibrium — their molecules might hit each other, giving each other energy, but overall, there is no net transfer of energy — no overall flow of heat.

So there we go. The zeroth law.

The first law of thermodynamics is the one that says: “you can’t win.” And this essentially says that energy is conserved, for thermodynamic systems. You probably already know that energy can’t be made or destroyed — only converted into other forms. This is a fundamental law that applies to all of physics — so much so that energy conservation is usually the first thing physicists try to write down an equation for, when they’re faced with a new problem.

Energy can change from form to form, and it can move around, but you can’t make it or lose it.

So, for this, we need to go back to our definition of a system. The First Law basically says that energy can move in and out of a thermodynamic system in two ways, as heat or as work, and that what’s stored inside is called internal energy. The change of internal energy of the system is the same as the heat supplied to the system, plus the work done on the system.

“Work”, in physics, is a force applied across a distance. It’s a mechanical transfer of energy. So, when you lift an apple, you’re doing work on that apple, against the force of gravity. When a car moves along a rough surface, its engine is doing work against the force of friction. Even when you’re at work, and you’re pressing down on the keys, your fingers are doing work against the spring mechanisms. [Rihanna sample]

So let’s think about our gas box again. If you push a piston to compress the box of gas, you’re applying a force. The force acts against the pressure of the gas molecules on the piston — and it moves the piston across a distance. So, when you push the piston, you’re doing work on the system, and giving it energy. The First Law of Thermodynamics says that the internal energy — the energy of the gas molecules in the box — can change by work being done, or by heat flows. You can let heat flow into the box, and the internal energy increases. You can compress the box by doing work, and the internal energy increases. Alternatively, heat could flow out, or the gas could push the piston across a distance and do work on the outside world. But whatever happens, the energy is always conserved. If heat flows out of the box, and you’re not replenishing it by doing work, the gas must be losing internal energy. If you do work on the gas, with no heat flow, the internal energy must increase — and so on.
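For the show notes, the First Law as pure bookkeeping, using the sign convention from this episode (heat supplied to the gas, plus work done on the gas). The numbers are made up:

```python
# First law: change in internal energy = heat in + work done on the system.

def internal_energy_change(heat_in, work_on):
    return heat_in + work_on

# Compress the gas (50 J of work done on it) while 20 J of heat leaks out:
print(internal_energy_change(heat_in=-20.0, work_on=50.0))   # +30 J

# Let the gas push the piston outward (doing 80 J on the world), no heat flow:
print(internal_energy_change(heat_in=0.0, work_on=-80.0))    # -80 J
```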

You might be thinking — okay, but is there really a difference between heat and work, if they’re both forms of energy? And if they’re both transferred by speedy molecules crashing into walls and so on? It’s a fair point to make. Heat and work are really both kinetic — movement — energy in molecules. But in heat, this motion is disordered — the directions of the motions are random. When you heat the gas, everything moves faster, but there’s still no overall direction. When you do work, you’re pushing in a specific direction. The molecules will start to move more in that direction. Hence the distinction. Heat, then, is the flow of the kinetic energy of these molecules from hot places — where the molecules have lots of energy and move quickly — to cold places, where they don’t. This energy can flow by physical collisions, as in conduction and convection, or by radiation that passes between the molecules.

The first law of thermodynamics, as well as being incredibly useful, has one super important consequence. You can’t create something from nothing. In other words, you can’t “win”. You can’t create energy.

For almost as long as humans have been tinkering with devices, they’ve dreamed of a perpetual motion machine. The idea is simple — either something that generates energy from nothing so it can move continually, or else something that is a perfect store of energy — with no friction at all, that doesn’t dissipate. You might think to yourself — the planets, in their orbits, seem to be a perfect store of energy — they continue orbiting, seemingly forever; but, in actual fact, they suffer ‘friction-like’ effects too, due to things like the solar wind of particles from the Sun, collisions with the (sparse) matter in outer space, and gravitational radiation.

The laws of thermodynamics forbid perpetual motion machines. The first law says you can’t have a perpetual motion machine of the first kind, which creates energy out of nothing, because energy is conserved. The second law bans the other kind. So, truly, in thermodynamics, you can’t win. And that old Simpsons joke — where Lisa builds such a machine and Homer yells at her, “Lisa, in this house we obey the laws of thermodynamics!” — totally works.

Not that people haven’t spent ages, and ages, and ages trying. Lots of people have tried to build them using all kinds of techniques. Many inventors, especially before thermodynamics was really understood, thought that magnets were producing energy from nothing. You can see why, too — put two magnets close together and they fly across the room towards each other, as if they’re pulling kinetic energy out of the sky. That’s exactly what they are doing, in a sense: the energy is stored in the electromagnetic field, which is set up in a particular way because of the configuration of the magnets. But you can’t use this to produce limitless energy — and you can see why, right? If you want to produce more energy, you need to pull the magnets apart again — do work on the system — so energy is still conserved.

Lots of perpetual motion designs also relied on gravity. So there are examples where you have complicated arrangements of water in buckets, that tip over and rotate a wheel. The problem is that there is always some energy dissipated — when the water hits the bottom of the bucket, for example, it will generate some heat which leaks out into the atmosphere. There’s also usually some friction in the wheel. There are an amazing number of different designs for this, from engineers and inventors across the world, from Bhaskara to Leonardo da Vinci. Some of these contraptions can be well-designed, and can move for quite a long time, but perpetual motion is impossible. Many of the most convincing attempts have hidden a dirty secret — there’s a man behind the curtain frantically pedalling to drive the wheel, or a massive secret array of hamsters spinning in wheels to power the device, or the whole thing is just an elaborate con.

The Royal Society banned all future proposals for perpetual motion machines back in 1775, probably sick of figuring out the various ways that people had tried to break the unbreakable laws of physics. Despite this, people have been claiming to have perpetual motion machines since the dawn of time, and they still do! Nowadays, they’ll usually dress up their claim by pointing to some random aspect of physics, or waving their hands and saying “quantum oogly-boogly”, but it’s the same old nonsense, every time. A notable recent example was the “motionless electromagnetic generator”, or MEG, built by Tom Bearden. Allegedly, the device could eventually sustain its own operation while also powering a load, without any external electrical power. Bearden claimed that it didn’t violate the first law of thermodynamics because it extracted “vacuum energy” from its immediate environment. Critics dismissed this theory and identified the MEG as a perpetual motion machine with an unscientific rationalization. Science writer Martin Gardner said that Bearden’s physics theories, compiled in the self-published book Energy from the Vacuum, are considered “howlers” by physicists, and that his doctorate title was obtained from a diploma mill.

Alongside this, and amazingly in my view, Steorn Ltd claimed back in 2006 that they had built a device based on rotating magnets that could generate more power than it took to run it; they solicited scientists to test their claims. With a lot of swagger, promising a revolutionary source of “clean, free energy”, they managed to attract an incredible 23 million euros in investment. But a public demonstration in 2007 was cancelled at the last minute due to mysterious technical difficulties, and in 2009 the jury of scientists they themselves had convened said that the technology didn’t work.

Amazingly, they continued to pull this stunt after being publicly rebuked by scientists and failing to demonstrate their technology on the world stage. In May 2015, Steorn put an “Orbo PowerCube” on display behind the bar of a pub in Dublin. The PowerCube was a small box which the pub website claimed contained a “perpetual motion motor” which required no external power source. It was powered by a battery.

The whole thing was an elaborate con — not even an original one — and in November 2016, the company went into liquidation, after stringing along the gullible and the people who didn’t believe in science and squeezing every last drop out of them.

Ironically, I guess they did make something from nothing — they had no workable technology, and they made 23 million euros from it. All I will say is — if you don’t believe the laws of physics, you have to believe the obvious reality that any person who successfully developed a perpetual motion machine would be ludicrously wealthy, and many of the world’s problems would be solved. I guess they don’t work. It’s amazing that, more than 200 years after the Royal Society said perpetual motion could never happen, people still fell for it. If someone tries to sell you a perpetual motion machine, DO NOT BUY IT. Obey the laws of thermodynamics. You have no choice.

Thank you for listening to this episode of Physical Attraction. You can’t win, but I hope you had fun finding out why.

PLUGS — FACEBOOK TWITTER OTHER SHOWS MEDIUM WHATEVER’S FRESH (everyone tell one friend, billions of listeners)

Until then… don’t waste your time trying to build a perpetual motion machine. Just eat a piece of toast and watch the evening news.

You can’t break even (Second Law — might want to make this more poetic than it is, you only get one shot at ranting about entropy)

Hello, and welcome to this episode of Physical Attraction. Today, we’re going to be dealing with one of the most important, and one of the most philosophically… beautifully depressing aspects of physics — The Second Law of Thermodynamics. We’ll get into the knotty and fascinating definition of entropy, and what it means to say that the world is always becoming more disordered. Because it turns out that, in life and in physics… you can’t win. You can’t even break even.

[Define Second Law]
[Talk about thermodynamic entropy, perpetual motion machines, Carnot, heat into work]

To understand the Second Law, you need to know about entropy. Entropy is a complicated concept, and you’ll see it defined in lots of different ways. You can think of it as a measurement of how disordered a system is. You can also think about it in terms of the ‘quality’ of the energy that you have. Some energy can be used to do work — it’s ‘organised’ energy, like the energy of a driving force pushing in one direction. Some energy can’t be used to do work — it’s disorganised, like the random fluctuations of molecules of gas in a jar. These two ideas — entropy as disorder, and entropy as a measure of how useful energy is — turn out to be deeply connected. Work is high-grade energy, and heat is generally low-grade energy.

But scientists didn’t just go from nothing to a definition of entropy. So, to really understand, we have to go back into history to see how the Second Law first came about.

Sadi Carnot was an interesting man. He was a French engineer and mathematician, and his father had been part of the Directory that had taken over France after the French Revolution in the 1790s. He was investigating engines. Now, we should define this: an engine is basically a process that turns heat into work. So, the engine in your car is converting the heat that’s generated when you burn the petrol into work against friction — organized motion in the direction that you’re driving the car.

The efficiency of an engine is what fraction of heat can be converted into work. So, an engine with an efficiency of 1 means that you can convert all of the heat into useful work. Some engines had been designed already then, but people were constantly looking for new improvements in their efficiency — so that less fuel would be required to produce the same amount of work. Their ideas were usually along the lines of — what if we use something different in our engine? Most engines used steam to work, but maybe there was a substance that was better at converting heat into work.

Engines require the heat to flow. So, generally, there are two parts of the engine. There is a source of heat energy — at some hot temperature, Tsource — and a sink of energy, at some cold temperature, Tsink. What Carnot showed, purely mathematically, purely theoretically, was something amazing. It didn’t matter what substance you used in the engine. There would always be a maximum theoretically possible efficiency.

It was:

Efficiency = 1 - Tsink/Tsource

So you can see from this formula a couple of consequences. You can never have an engine with an efficiency of 1, one that perfectly converts heat into work. It will always be less than 1. To get a perfect engine, you’d need Tsink/Tsource to be 0. Remember, we’re using the absolute scale of temperature here, so 0 degrees is the coldest anything can possibly get. Getting that ratio to zero would need either the cold sink to be at absolute zero temperature, which is impossible, or the source of heat to be at an infinite temperature, which is also impossible. This is what it means when the Second Law says you can’t break even: in every possible process, there are heat losses, energy losses, that dissipate throughout the Universe and are gone forever. No way to recapture them. You’ll always lose something. You can’t break even.

And in reality, the situation is even worse. This efficiency, 1 - Tsink/Tsource, is called the Carnot efficiency, and there are usually other processes that stop you from getting there. Alongside this, it’s super difficult to get the temperatures to be very different in practice. Imagine the difference between steam and ice being your source and sink. Then you have a Tsink of 273K and a Tsource of 373K — 0 and 100 degrees Celsius. You might think ice and steam are very different in temperature, but that only gives you a maximum, Carnot efficiency of about 27%. In reality, we can use superheated steam to get a better temperature difference, but the real-world efficiency of thermodynamic processes is rarely more than 30–40%. And this theoretical result has huge consequences: in all of our fossil fuel and nuclear power plants, where we convert heat into work, most of the heat that’s generated by burning the fuels is lost. It’s not that we build inefficient machines. It’s that the laws of physics — and, specifically, the second law of thermodynamics — prevent us from doing any better.
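For the show notes, here’s that calculation in a few lines of Python. The ice-and-steam numbers are the ones from above; the superheated-steam figure is just illustrative, not a quote from any real power plant:

```python
# Carnot (maximum possible) efficiency of a heat engine; temperatures in kelvin.

def carnot_efficiency(t_source, t_sink):
    return 1 - t_sink / t_source

print(carnot_efficiency(373.0, 273.0))   # steam-to-ice engine: ~0.27, i.e. ~27%
print(carnot_efficiency(823.0, 300.0))   # superheated steam (~550 C): ~0.64 ceiling
```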

The ideal engine, then, is called a Carnot engine. Which leads us seamlessly onto this episode’s physics-based chat-up lines.

“Are you a Carnot cycle? Because you’re as close to perfect as it’s possible to be.”

Or, if you’re feeling cynical and jaded…

“Are you a Carnot cycle? Because you’re ruthlessly efficient but technically unattainable.”

This all might seem a little bit specific and abstract. Maybe it even sounds a bit like engineering and not physics! But heat engines are just how Carnot thought of this version of the Second Law. What he touched upon doesn’t just apply to heat engines. He had stumbled upon a fundamental law of physics that affects any flow of energy, from the molecules in our bodies to the shining stars.

Unfortunately for Sadi Carnot, the brilliance of his discovery wasn’t appreciated for quite a long time. It just seemed like a lot of theoretical rubbish about heat engines, and perhaps the engineers were still convinced they could beat the laws of physics. Or maybe the physicists weren’t willing to appreciate the genius of an engineer.

It took until later in the 19th century for physicists to pick up where Carnot had left off. First, Rudolf Clausius made a statement that might seem pretty obvious. He pointed out that heat flows from hot objects to cold objects. And, of course, we know this from our day-to-day lives. But he noted that the only way to get heat to flow from cold objects to hot objects — so the cold one gets colder, and the hot one heats up — is by doing work. There has to be some change, somewhere else in the system.

You can think of temperature like a hill. Heat naturally flows downhill, from hot temperatures to cold ones. If you want to push heat uphill — from cold temperatures to hot ones — you need to do some work.


Then Lord Kelvin came along. And what he realized was, in a way, completely related to what Carnot had figured out decades before. He said: it’s impossible to have a cyclical process (one that repeats, returning the engine to its starting state) that completely converts heat into work.

In other words, there always has to be a flow of heat from hot to cold.

You can convert some of the heat into work, but there always has to be some remaining as heat that goes from hot to cold. So, you can see, this is really similar to the Carnot efficiency argument. You can’t have a perfectly efficient heat engine that turns all of your heat into work.

By now, if you’re unfamiliar with thermodynamics, your head is probably set to explode. We have four different versions of the Second Law?

First, heat engines have this maximum efficiency that depends on the temperature difference.
Second, heat flows from hot to cold and you need to do work to push it uphill from cold to hot.
Third, you can’t completely convert heat into work — there always has to be some heat flow.
Fourth, there’s the idea that you most often see expressed — in an isolated system, entropy must always increase.

All of these statements turn out to be the same thing, or parts of the same thing. The Second Law of Thermodynamics. But seeing this is initially really difficult. I’ll release a bonus episode explaining how Kelvin and Clausius are the same thing.

Last episode, we talked about what is meant by a thermodynamic system, and for a system to be in a certain state. We said that a ‘state’ is like an equilibrium for the system, so nothing is changing with time. And you can characterise a state by saying — this is the temperature, this is the pressure, this is the volume, etc. Actually, if you have knowledge of the substance you’re dealing with, you can figure out any one of these from the others, using the equation of state. The most famous one is the ideal gas law, which says

pV = nRT. That means the pressure, multiplied by the volume, is equal to the amount of stuff you have (n), multiplied by the temperature, and by a constant, R. This might seem complicated to remember, but it’s actually not too bad when you think about what you’d physically expect to happen. If you increase the pressure while the volume stays the same, the temperature has to go up, because pV has gone up, so nRT must go up. Similarly, if you increase the temperature at constant pressure, the gas will expand, and so on. This equation is incredibly useful for predicting how thermodynamic systems behave — and it means you only need to measure a few quantities and you can work out the last one.
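For the show notes: the ideal gas law as a couple of Python helpers. The numbers below are the classic ones for one mole of gas at standard conditions:

```python
# Ideal gas law: p V = n R T. Given three of the variables, solve for a fourth.

R = 8.314  # gas constant, J/(mol K)

def pressure(n, T, V):
    return n * R * T / V

def temperature(p, V, n):
    return p * V / (n * R)

# One mole in a 0.0224 m^3 box at 273 K exerts ~101 kPa (atmospheric pressure):
print(pressure(n=1.0, T=273.0, V=0.0224))

# Halve the volume at the same pressure, and the temperature halves too:
print(temperature(p=101_325.0, V=0.0112, n=1.0))   # ~136.5 K
```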

But this is what’s called a macrostate. Macro means big. There are also ‘microstates’ for a system. To understand this, it’s best to go back to our favourite example — atoms of gas in a jar. The macrostate is, okay, how many atoms are there, what’s the temperature, volume, pressure. The *microstate* is, okay — what are the positions and speeds of every atom in the gas jar? So you can see that there are billions of possible microstates, because each atom of gas could be in all kinds of positions and have all kinds of speeds. But, actually, many microstates will correspond to a macrostate. You can have lots of configurations, lots of positions and velocities for atoms, that will still give you the same temperature, pressure, volume, when you measure these properties for the whole gas. So individual macrostates have many, many microstates associated with them.

Each microstate has a certain probability, a certain likelihood of happening. And this too kind of makes sense, when you think about it. It’s really unlikely that all of the molecules will crash into one molecule at once, making it super-hot, or crash into one wall of the jar at once, leaving no pressure on the other walls. Microstates where all the molecules have a fairly similar energy and whizz around happily are far more likely to occur — they have a higher probability.

So where is all this going? Well, in some sense, you can describe the entropy of a macrostate as a count of the number of microstates that correspond to it. If there are billions of ways of distributing energy among the molecules that give you the same temperature, pressure and so on — that’s a very high entropy state. If there are only a few ways you can distribute these things, then the state is low in entropy. And now you can begin to understand what we mean by saying that entropy is like a measure of disorder. Imagine the atoms in the gas again. There are many microstates where they’re all just whizzing around randomly, with no particular order or reason to what they’re doing. But there are fewer microstates where they’re all moving in one direction — where you add some order to the system.

Here’s another way of thinking about it. Remember that whole fun story about an infinite number of monkeys bashing away at an infinite number of typewriters? If you give them long enough, then they will presumably type everything that can possibly be typed, including the complete works of Shakespeare. In fact, this is exactly how we produce the scripts for these episodes, which is why it takes so long to get new ones out.

But we all know that if you actually tried this experiment for real — well, PETA would be on to you, and the monkeys would fling poop everywhere, and things would fail. But we also know that the monkeys would produce a hell of a lot of gibberish. Sheets and sheets of things that just didn’t make sense, random letters and numbers, kinda like you’d expect a monkey to type.

This makes sense. The complete works of Shakespeare — even the English language, or Hungarian for that matter — are very ordered systems. They have a lot of order, a lot of structure. And, in all the possible states for a set of letters to be in, there’s only one microstate that is the exact, complete works of Shakespeare. There’s only one microstate that is Fifty Shades of Grey, or the script to this episode. But there are billions of microstates that are disordered, jumbled nonsense. When we look at them, there might only be a few that we recognise as novels. The rest are basically indistinguishable — they all belong to the same macrostate: books filled with disordered junk. So you can begin to see this idea of there being many more disordered microstates than there are ordered ones.


We owe this idea of entropy to Boltzmann, a genius physicist. It can be written:

S = k log W.

S is the entropy. k is Boltzmann’s constant, which has the units of entropy. W is the number of microstates we can have — the number of ways of arranging the molecules of a system to achieve the same total energy, the same state. So this will tell you what the entropy is, and the entropy increases when there are more microstates.
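For the show notes: a toy microstate count, assuming a made-up system of 100 atoms where a microstate just records which atoms are carrying one unit of energy, so W is “100 choose n”:

```python
# Boltzmann's formula S = k log W, with W counted for a toy system.
from math import comb, log

k = 1.380649e-23   # Boltzmann's constant, J/K

def entropy(n_excited, n_atoms=100):
    return k * log(comb(n_atoms, n_excited)) if n_excited else 0.0   # S = k ln W

print(entropy(0))    # 1 microstate    -> zero entropy (perfect order)
print(entropy(1))    # 100 microstates -> entropy jumps up
print(entropy(50))   # ~1e29 microstates -> the most disordered macrostate
```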

This is bad news for order, though. Because we think that if you leave an isolated system alone for long enough, and don’t do anything to it, what it does is gradually explore all of its microstates. For the atoms in the gas jar, this happens through collisions and exchanges of heat and energy — each one pushing the system into new microstates. And, eventually, inevitably, it will tend towards the ones that are high-probability. It will tend towards one of the billions of microstates that are disordered. It will tend towards disorder and chaos, and, in an isolated system, entropy will always increase.

If you inject some order into the gas jar — maybe by dispatching a fleet of molecules moving in one direction — the system will quickly explore the many microstates where those molecules are all jumbled up, moving in random directions. The order will vanish. Entropy has increased.


And now, maybe, you can see how this is the same as the other statements of the Second Law. If we could completely convert heat — disordered motion of atoms and molecules — into work, which is ordered, useful, and goes in one direction, then the entropy of the world would have decreased, which is forbidden. And, if we could get heat to flow from cold to hot, then we’d also be reducing overall entropy. This one is a little bit harder to see, but the change in entropy is basically the flow of heat divided by the temperature. Hence the chat-up line:

“Let’s increase the sum of our entropies. Baby, it’s cold outside.”

Let’s explain this further. Imagine you have a really cold system, and you supply a little bit of heat. Suddenly, there might be ten or a hundred times more energy than there was before. The molecules can occupy way more states than they could before. The entropy has increased by a lot. If you supply that same little bit of heat to a hot system, it doesn’t change things all that much. The system can occupy a few more states than before, but not so many.
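For the show notes, a minimal sketch of that “heat over temperature” rule. The numbers are arbitrary; the point is the asymmetry between hot and cold:

```python
# Entropy change for a small flow of heat: delta_S = Q / T (T in kelvin).

def entropy_change(heat, temperature):
    return heat / temperature

Q = 10.0                          # joules of heat
print(entropy_change(Q, 100.0))   # into a cold system: +0.10 J/K
print(entropy_change(Q, 1000.0))  # into a hot system:  +0.01 J/K

# Heat flowing hot -> cold raises total entropy; the reverse would lower it:
print(entropy_change(-Q, 1000.0) + entropy_change(Q, 100.0))   # +0.09 J/K > 0
```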

Maybe another analogy goes like this. Imagine you start singing, at the top of your lungs, “Angels”, by Robbie Williams. Now, I’m sure your singing voice is lovely, but this does make the world more chaotic. If you start singing “Angels” in the middle of a football match, it won’t have much of an impact. Everyone is screaming and shouting already. It’s already very chaotic. Your extra bit of chaos will not change things much. This is like heat flow at high temperatures. But consider an exam room where students are taking an important thermodynamics exam. Things are very ordered, very structured; entropy is low. If you start belting out karaoke classics in the middle of the exam, you will cause a lot of chaos. This is like a flow of heat into a low temperature system. Both times, the amount of heat — your wonderful singing voice — is the same. But, because one system started off disordered (high temperature) and the other was ordered (low temperature), the entropy is different. So, dumping heat from a cold place into a hot place — somehow making the exam room quieter by pumping the noise into the football match — that’s against the laws of nature!

So now, I hope, you’re understanding the Second Law of thermodynamics a little bit more. Entropy, which is like disorder and chaos, will always increase.

This is a really, really, fundamental mathematical idea. It seems very poetic — and it is very poetic, this idea that things always tend to decay, that order and structure can’t be preserved, that if you leave things alone they’re consumed by rack and ruin and chaos… but it’s also a mathematical consequence of the laws of physics. This is reality.
Famous scientist Arthur Eddington put it like this:

The law that entropy always increases, holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell’s equations — then so much the worse for Maxwell’s equations. If it is found to be contradicted by observation — well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.

Sir Arthur Stanley Eddington, The Nature of the Physical World (1927)

You might try to think up counter-examples. Fridge-freezers make things colder, right? They decrease entropy? Well, yes, but they also pump out heat at the back — more heat than they remove from the objects. Entropy in the Universe as a whole always increases. Say that I tidy my room, and arrange things neatly — I have, by my actions, increased the order of the Universe. By typing these words, I am producing order — structure — where before there was chaos. As you listen to this, hopefully, neurons in your brain are firing in an ordered way. Structure is arising. Entropy is actually decreasing… right?

But this is because the things I’ve talked about are not isolated systems. When I tidy my room, I generate heat doing it, and that heat energy makes the world around me slightly more disordered. I haven’t done anything to decrease entropy in the world — I’ve just fought against entropy in my bit of it, by creating some more somewhere else. As I type these words on my laptop, even the transfer and storage of information — information of any kind, on a hard drive, whatever — is generating heat. This heat makes the Universe more disordered. There is no physical way to store or process information without increasing the entropy of the world as a whole. Even in your brain, as the neurons fire, as you learn about entropy from me — your brain might have a little bit less disorder in it (I hope!) as I explain, but for the Universe as a whole… entropy always increases.

And when this was discovered, it was philosophically troubling. Ludwig Boltzmann, the man who basically invented the field of statistical mechanics — which explains everything in terms of microstates and entropy changes — was deeply disturbed by the Second Law of Thermodynamics. Because entropy will always increase, and there’s no way to stop it, it means that the evolution of the Universe is progressing along one inevitable path. All forms of energy, eventually, find themselves turning into heat. All of that heat gradually dissipates across the Universe. Things are, gradually, but inexorably, becoming more and more chaotic. The stars will burn up and burn out. All of their heat will spread across the Universe. We know, now, from Cosmology, that the Universe will expand forever. As entropy increases, slowly, gradually, inevitably — all of the energy will turn into heat and spread across the Universe, as things unravel and tend towards the state of maximum disorder. In this world, the energy is spread thinly across an ever-expanding Universe. In this heat death state, nothing can exist but a dull, cold soup of particles and photons. There are billions upon billions of microstates available, but they’re all highly disordered, with no structure in them. There can be no life. There can be no thought. This is the ultimate fate of the Universe — entropy maximised. Nothing we can do will stop it. Everything we try makes it worse. Regardless of how beautiful or sturdy a palace we build, it will one day be dust and ashes. It was this realization, many say, that drove Boltzmann to suicide.

From my perspective, though, the philosophical side of the Second Law of Thermodynamics is just perfect. It’s a perfect metaphor for humanity as a whole — the whole damn human endeavour. After all — we all know that we’re going to die. And we all know that someday, we’ll be forgotten. No matter how famous you are, how much you manage to achieve, how many people adore you during your lifetime and afterwards — entropy will come for you in the end. We know that the seas will rise, that the sun will heat up and destroy our planet, and that all the information that proves we ever existed will be lost. We can tidy our rooms. We can write words. We can build relationships with each other, we can build homes, and we can build cities. We can rearrange the world into something that looks orderly to us, for a while. We can keep it all together, for a little while. But we know this is only temporary. Entropy will always increase. Everything will eventually give way to chaos. We will die, and be forgotten, and all the order that once was will be lost, and eventually the whole universe will be a dull, thermal equilibrium soup.

But in the meantime, there is still change; there is still flux; there are still flows: from the floods of heat and light in our lamps that light up the city skyline, to the flow of vibrating molecules that allows you to hear the words I’m saying now, to the rush of chemicals in your brain when you feel joy, or fear, or sorrow. These flows take place — always — governed by the Second Law; entropy and disorder are always increasing — but, in the meantime, they can still be beautiful.

We know it’s pointless. We know, in a sense, that it’s practically meaningless. But we keep going. Every morning we get up and do the things that we feel we have to do. We go through the world and we rearrange its chaos in a way that pleases us slightly more, when we can. We do every stupid thing that keeps us alive, and every stupid thing that makes us feel alive. It will not save us. We cannot win. We can’t break even. We can’t even get out of the game. But we keep going. Maybe we don’t know anything else. Maybe we’re foolish. Maybe we just like to spit in the eye of inevitability.

Maybe our purpose is just this: to fruitlessly, hopelessly struggle against the slow unravelling of everything.

Maybe this is what it means to be human. For a little while, we can fight the tide; we can resist disorder; we can bend the world to our will; we can stop everything from falling apart, in our own little ways. For a little while, we can look at the chaos around us, and tell ourselves stories about what it might mean. We are engines; we turn the random twistings and turnings of the Universe into narrative, into order. We can’t break the Second Law. But we can feel like lawbreakers. We can feel, for a little while, maybe a long while, like the game can be won.
Thanks for listening etc.

PLUGS

Until then, in your own little ways, keep fighting the fight against entropy. I know it won’t work. I’m here with you, fighting too. Until the end of our days.

Kelvin and Clausius

First, let’s talk about how Kelvin and Clausius’s statements of the Second Law are basically the same thing. The episode on the Second Law of Thermodynamics explained that:

Kelvin’s statement of the second law is that you can’t turn all of the heat you have into work. You can’t convert heat into work with perfect efficiency.

Clausius’s statement of the Second Law is that heat naturally flows from hot temperatures to cold temperatures. If you want to get heat to flow from cold places to hot places, you need to apply work.

Imagine that we had a Kelvin violator. (Steady on!) That is, we have some magical device that can turn heat into work, perfectly. (By the way, like I’ve said before, if anyone tries to sell you such a device, call bullshit! Obey the laws of thermodynamics!)

We have an engine that can turn heat from a hot source into work, with no losses. Imagine that we then use that work to drive another engine in reverse — pushing heat from a cold place into a hot place. Remember, a heat engine is something that lets heat flow from hot to cold and generates work — so, running it in reverse, and putting work in, you can force heat to go from a cold place to a hot place. Now add up the flows. The work came entirely from heat drawn out of the hot place, and that work pumps extra heat from the cold place into the hot place. The net result is that heat has flowed from the cold place to the hot place, with no work supplied from outside. So by violating Kelvin’s law, we have also violated Clausius’ law.

Similarly, if you have a Clausius violator — steady on — it also violates Kelvin’s law. To see this, imagine a system of engines again. One of them can violate Clausius’ law — it lets us take heat from a cold place and dump it onto a hot place. The other one is a normal heat engine, that lets heat flow from hot to cold and turns it into work.

So the normal engine is running. Some heat comes from the hot place — part of it goes into work, part of it goes to the cold place. But you can just use your Clausius violator to dump the heat that flows to the cold place back to the hot place. Since it violates Clausius, it can make heat flow against temperature gradients. So what’s the net result of these two machines running together? There’s no net flow of heat from hot to cold — we’re compensating for that with the Clausius violator. But the engine is doing work. So what we’re doing is converting heat into work, perfectly. In other words, if you add the machines together, you have a Kelvin violator. Kelvin and Clausius are equivalent statements — they imply each other. The fact that you can’t turn heat, perfectly, into work — it’s the same statement as the fact that you need work to get heat to flow from cold to hot. There we have it.
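For the show notes, here’s the first composite machine as bookkeeping, with made-up numbers. I’m assuming the reversed engine works like an ordinary heat pump, so the heat it dumps on the hot side is the heat it lifts from the cold side plus the work put in:

```python
# A (hypothetical!) Kelvin violator turns heat entirely into work; that work
# then drives a reversed heat engine pumping heat from cold to hot.

work = 100.0                   # J of work from the Kelvin violator...
heat_from_hot = work           # ...made from 100 J drawn from the hot place

heat_lifted_from_cold = 300.0  # J pumped out of the cold place (made-up number)
heat_dumped_to_hot = heat_lifted_from_cold + work   # 400 J lands on the hot place

net_into_hot = heat_dumped_to_hot - heat_from_hot   # 300 J
net_out_of_cold = heat_lifted_from_cold             # 300 J
print(net_into_hot, net_out_of_cold)
# Net effect: 300 J flowed cold -> hot with zero work from outside. That's a
# Clausius violation, built entirely out of a Kelvin violation.
```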

You can’t get out of the game

Hello, and welcome to Physical Attraction. Despite the impending and inevitable heat death of the Universe, and everything falling into the ultimate pit of chaos and entropy, we defiantly and disgracefully remain the show that tries to explain physics, one chat-up line at a time. In this episode, we’re dealing with the Third Law of Thermodynamics! Usually, I try to make these episodes so you don’t need to listen to anything else to understand them. But here, I think — at least the episode on the First Law will be helpful, because that’s where we defined what absolute temperature was. You’ll need this for today’s episode. And it will probably really help if you know about the Second Law as well! I will quickly recap, but I don’t want to bore the dedicated, hardcore fans who listen to every episode — so if it’s confusing, listening back to the first few episodes in the Thermodynamics series will really help.

So — so far, we’ve dealt with the first two laws of thermodynamics. The first one says that you can’t win — energy is always conserved, you can’t produce energy from nothing. The second one says that you can’t break even: all processes involve entropy, disorder, increasing — and losses to heat, which is like the disordered motion of atoms. So whenever you use an engine to do work — and, more broadly, in any process where energy flows — some will be lost: you pay the energy penalty. You can’t break even.

The third law of thermodynamics says that you can’t get out of the game. As if the game wasn’t rigged enough! But what does this mean?

Essentially, the third law of thermodynamics says that you can’t cool anything down to the absolute zero of temperature. Nothing can be cooled down to absolute zero — at least, not in a finite amount of time.

So we have already discussed that temperature is a measurement of how much energy molecules have — the higher the temperature, the more kinetic (movement) energy the molecules have, the faster they’re vibrating. Zero on the absolute temperature scale implies that the molecules have no kinetic energy at all — they’re completely still. But you can never reach this; it’s impossible. As far as temperature goes, you can’t get out of the game.

And you can see this from the Second Law of Thermodynamics, in a lot of ways. Remember we were talking about the Carnot efficiency? The most efficient you can be when you’re converting between forms of energy — heat into work — is the Carnot efficiency: and this is

efficiency = 1 - Tsink/Tsource

This is for a heat engine, that converts heat into work. But if you run this heat engine in reverse, using work to pump heat out of a cold place, then the relevant figure of merit (the ‘coefficient of performance’: the heat moved per unit of work) becomes

COP = 1 / (Tsurroundings/Tcold - 1) = Tcold / (Tsurroundings - Tcold)

As the cold temperature approaches zero, this coefficient gets lower and lower. Eventually, when Tcold = 0, it is zero.


In other words, you can put in an infinite amount of work, without driving any more heat away from the cold place. In a bizarre, but very very real twist of the laws of physics, you need infinite energy to get down to zero energy.
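For the show notes: a sketch of how that figure of merit collapses as you approach absolute zero. I’ve assumed room-temperature surroundings; the target temperatures are just examples:

```python
# Coefficient of performance of an ideal (Carnot) refrigerator:
# COP = T_cold / (T_surroundings - T_cold). It falls to zero as T_cold -> 0.

def fridge_cop(t_cold, t_surroundings=300.0):
    return t_cold / (t_surroundings - t_cold)

for t in [250.0, 77.0, 4.0, 0.1]:
    print(t, fridge_cop(t))
# 250 K: COP 5.0 (easy); 77 K: ~0.35; 4 K: ~0.014; 0.1 K: ~0.0003.
# Each joule of heat removed costs ever more work as the target nears zero.
```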

You can sort of see how difficult it is to get down to zero energy altogether. Because heat always flows from hot to cold, you’ll always have a hotter region surrounding your region that’s trying to get down to absolute zero. It will be very difficult to stop that region from passing any energy towards the colder region. But this isn’t just a practical problem — that it’s difficult to perfectly insulate. It’s just that, the colder something gets, the harder and harder it is to pump more heat out of it. Eventually, it becomes impossible.

This can also be defined in terms of the entropy of the substance that you’re trying to cool. Remember, we defined entropy as a kind of disorder — or, alternatively, as how many different microstates there are that correspond to the same overall macrostate. A microstate is the specific arrangement of the atoms that make up your substance, while a macrostate is the bulk properties of the system, like temperature, pressure, and so on. It’s a little bit like fans filling up a stadium. If the stadium is half full, there are plenty of ways to arrange the fans to keep it half full — plenty of microstates that correspond to that macrostate. If the stadium is completely empty, however, there’s only one way to arrange the zero fans. (Maybe it’s a Nickelback concert?) This is a very low-entropy system.

So there’s some entropy associated with the temperature of the system. This is because, when the system has thermal energy due to its temperature, the molecules can take on more possible energies, velocities, momenta and so on. But even when molecules have no thermal energy, there can still be entropy due to the arrangement of the molecules. If a pure substance forms a perfect crystal — such that there is only one possible arrangement for all the molecules — then, at absolute zero, it has zero entropy. But such a perfect crystal doesn’t exist.

Why does this mean that absolute zero can’t be attained? To understand this, we need to know a little bit more about thermodynamics and state functions. Basically, cooling processes usually work as follows. You flip the system between two states. One flipping process reduces entropy at constant temperature; the other reduces temperature at constant entropy. You can imagine it like walking down a staircase drawn between two graphs: one is the S(T) curve — entropy as a function of temperature — for one state, and the other is the S(T) curve for the other state.

So one way you might do this is by changing the magnetic field — a technique called adiabatic demagnetization. Another way you might do this is by allowing your gas to expand into different chambers — the volume change is what causes the change in temperature and entropy. Basically, whatever you’re doing, you’re twiddling the system so that some of the entropy that used to be associated with temperature is now associated with something else, pushing the temperature down. Then you remove entropy by setting your dial back to zero again — it’s a cyclical process that reduces temperature and entropy in stages.
But if, regardless of your state, you have the same entropy at absolute zero, you have a problem. That staircase will need to have an infinite number of steps. Imagine the two curves intersecting on the axis, and you’ll see that you can’t draw a finite number of lines of constant temperature or entropy between them to get to the intersection point at absolute zero. I’ll put a graph in the show notes to make this clearer.
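In the meantime, for the show notes, here’s a toy numerical version of that staircase, assuming two entropy curves of the made-up form S(T) = S0 + c·T that meet at the same value at absolute zero (which is what the third law demands):

```python
# Cooling staircase between two toy entropy curves: field on (c_on) and
# field off (c_off). Each cycle: magnetize at constant T (entropy drops to
# the lower curve), then demagnetize at constant S (temperature drops).

S0 = 1.0               # both curves share this entropy at T = 0 (third law)
c_on, c_off = 0.5, 1.0

T = 300.0
for step in range(1, 11):
    # constant-S step: S0 + c_on * T = S0 + c_off * T_new
    T = T * c_on / c_off
    print(step, T)
# T halves every cycle: 150, 75, 37.5, ... and creeps towards zero, but
# because the curves meet at T = 0, no finite number of steps gets there.
```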

Entropy tends to a constant value (which is only 0 for a perfect crystal) at zero temperature. Hence no process can get to absolute zero in a finite number of steps.

Despite this, providing you’re willing to go through a lot of steps, you can get down to some astonishingly low temperatures. 2.73K is the background temperature of the vast wastes of outer space, which is due to the cosmic microwave background radiation. But we can get colder than that — we can cool to well below the temperature of this void.

By the way, it should be clear to those of you who have been listening so far that you really can’t cheat thermodynamics. So, of course, whenever we cool anything down, the second law is still not violated. Entropy still increases. And, in fact, more heat is produced. You can imagine it in simple terms — you need to do work to drive heat from a cold place to a hotter place. Some of that work will be ‘wasted’ as heat, so the total amount of heat that’s produced is more than what’s lost from the cold place. Your fridge-freezer pumps out plenty of heat at the back. There’s no way around this. And, similarly, if you’re listening to me and neurons are making connections in a brain that’s (perhaps, hopefully) becoming less disordered and containing more information… this process, too, means entropy will increase somewhere else, by a greater amount. You can’t cheat thermodynamics.

We already talked about adiabatic demagnetization as a popular method of cooling. There are also more refined and specialised methods. One of them is laser-cooling. It might seem counter-intuitive to imagine that zapping atoms in a gas with lasers can be used to cool them down, but it’s true. Here’s what happens, on the atomic level. Usually, the atoms are trapped by some clever method so that they can’t escape. Imagine James Bond being pinned down and strapped to the laser table, if you like. (SCORPIO, YOU’RE TOTALLY MAD.) The scientists zap an individual atom with an individual laser photon. The photon is then re-emitted by the atom in a different direction. So far, so good — but what actually happens is that the photon is re-emitted with a slightly shorter wavelength than before. This can happen due to something called the Stark shift — the electric field of the laser light itself actually distorts the energy levels of the atom, shifting them further apart.

Alternatively, this can happen due to Doppler shifts. The Doppler shift will be familiar to anyone who’s ever heard a car rushing by. As the car approaches you, the relative movement between you and the car squishes the wavelength of its sound wave — so it sounds higher pitched. As it moves away from you, that wavelength is stretched, so the pitch gets lower.
The same can happen when an atom in a gas has some velocity in some direction. The Doppler shift means that it can absorb a slightly longer wavelength photon, and then emit one with a slightly shorter wavelength. In other words, it emits a photon with more energy than the one it absorbed.

We all know energy is always conserved — so that extra energy actually comes from the kinetic energy of the atom. That means it slows down, and cools down. Using incredibly precise techniques, researchers have been able to cool things down to temperatures of a billionth of a kelvin. Remember that zero kelvin is that unreachable, absolute zero — and that the background temperature of the Universe, empty space itself, is 2.73K because of the cosmic microwave background radiation left over from the start of the Universe. So this is almost as cold as you can imagine going.
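For the show notes, a rough energy budget for one absorb-and-re-emit cycle. The wavelengths are illustrative (loosely in the region used to laser-cool rubidium atoms), and the tiny 0.001-nanometre shift is made up for the example:

```python
# One cooling cycle: absorb a longer-wavelength photon, emit a shorter one.
h = 6.626e-34    # Planck constant, J s
c = 3.0e8        # speed of light, m/s

lam_absorbed = 780.000e-9   # m
lam_emitted = 779.999e-9    # m: very slightly shorter, so more energetic

energy_lost = h * c / lam_emitted - h * c / lam_absorbed
print(energy_lost)   # ~3e-25 J shaved off the atom's kinetic energy per cycle
# Tiny per photon, which is why cooling takes many thousands of such cycles.
```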

Or is it? Occasionally, you get reports in the news that scientists have attained NEGATIVE absolute temperatures. This is always a little bit of a fudge, though, because it depends which definition of temperature you use. When they talk about negative temperatures, they don’t actually mean that the system is colder than absolute zero. In actual fact, it’s warmer. Confusing, right? That’s why you need to read past clickbaity headlines.

One way of understanding this is by imagining a set of 100 atoms, where each atom can sit in one of just two possible energy levels. Remember what we learned about entropy — it’s how disordered a system is — and temperature is linked to entropy. In fact, in one definition, temperature is how much energy you need to add to change the entropy by a certain amount.

If all the atoms are in the lowest energy level, that’s about as ordered as your system can get. Entropy is zero; there is only one way of arranging the atoms so that they are all in the lowest energy state. But if you give the system a tiny bit of energy — enough to kick one atom into the higher energy level — the entropy massively increases. There are a hundred ways of arranging things if one out of a hundred atoms is in the higher energy level. So entropy has gone up, and it only took a tiny amount of energy to do it. By this definition, the temperature is low — the amount of energy needed to change the entropy is small.

But if you have this definition of temperature, you can imagine a different scenario. The maximum entropy scenario is half and half — half of the atoms in one energy level, half in the other. This is kinda the most disordered way this system can be. You can see that, if all the atoms are in the higher level, we have zero entropy again — there is only one way of arranging the hundred atoms. So half and half maximises the entropy.

So now imagine that we do have half the atoms in the lower energy state, and half the atoms in the higher one. Then we somehow prod the system, pushing atoms from the lower energy state up into the higher one. We give the system more energy, but the ENTROPY goes down.

So if the temperature is something like the change in energy divided by the change in entropy — more formally, 1/T = dS/dE — the temperature is now negative. What that means, literally, is that you need to REMOVE energy from the system to INCREASE the entropy back to its maximum — the half-and-half state.
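If you want to see the sign flip for yourself, here’s a minimal sketch of the hundred-atom toy model in Python; the size of the energy gap is an arbitrary illustrative number, and only the sign of the answer matters:

```python
import math

k_B = 1.381e-23  # Boltzmann constant, in J/K
N = 100          # number of atoms in the toy model
gap = 1e-21      # energy gap between the two levels, in J (illustrative)

def entropy(n):
    # S = k_B * ln(number of ways to choose which n atoms are excited)
    ln_ways = math.lgamma(N + 1) - math.lgamma(n + 1) - math.lgamma(N - n + 1)
    return k_B * ln_ways

def temperature(n):
    # 1/T = dS/dE, so T = dE/dS for exciting one more atom
    dS = entropy(n + 1) - entropy(n)
    return gap / dS

for n in (1, 25, 49, 51, 75, 98):
    sign = "positive" if temperature(n) > 0 else "NEGATIVE"
    print(f"{n:3d} atoms excited: temperature is {sign}")
```

Below half-and-half, adding energy adds entropy, so the temperature is positive; past half-and-half, adding energy removes entropy, and the temperature flips negative.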

So in a sense, the ‘temperature’ is negative. But, weirdly, it’s because this region is actually MORE energetic than average; it has MORE atoms in the higher energy state than it would have in equilibrium, where entropy is maximised. Scientists have been able to create states like this, and sometimes it gets reported as “negative temperatures” — but it’s not the same as being colder than absolute zero, which is what the name might suggest. In fact it’s very different. If you put a negative-temperature object next to a positive-temperature object, energy would flow from the negative-temperature one to the positive one. And you can see that it only works because we are limited in the energy levels that our atoms can possibly have. You couldn’t have negative temperature for, say, molecules of a gas that are moving around — because you can always add more energy, and doing so always increases the entropy. It’s only in certain types of quantum-mechanical system — where stuff is limited in the energy levels it can occupy — that you can produce these negative temperatures. Really, we should say something like “negative entropic temperature”, or “negative quantum temperature”, because otherwise I think it’s quite misleading. Absolute zero remains the barrier.

All this talk about negative temperatures might have you thinking: okay, 0K is the coldest that anything can be. Is there a limit to how HOT things can be? The answer is, maybe. It might just be that temperature stops making sense at really high energies. Essentially, at really high temperatures, the laws of physics start to change. One example people reach for when they talk about this idea of “absolute hot”, instead of absolute zero, is the Planck temperature. That’s around 10³²K, or about 1.4 × 10³² kelvins to be more precise. This is a really strange concept, but, essentially, if anything ever reached the Planck temperature… gravity would be as strong as the other three fundamental forces. Our current laws of physics cannot describe matter in this state. We have no idea how ANYTHING would behave at 10³²K. It’s too hot, hot damn. And yet we also believe that the Universe — however briefly, perhaps in the first 10⁻⁴³ seconds after the Big Bang — was at this temperature. Things must have been very strange indeed. And because we don’t know — because this is unreachable — because even heating the smallest volume of space to this kind of temperature would recreate the conditions immediately after the Big Bang — some have even speculated that it might be, in some sense, the same as absolute zero. Temperature is a circle. That’s pretty wild.
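You can check that figure yourself, because the Planck temperature is built entirely out of fundamental constants, which is part of why it marks the point where our current physics gives out. A quick sketch:

```python
import math

hbar = 1.0546e-34  # reduced Planck constant, in J s
c = 2.998e8        # speed of light, in m/s
G = 6.674e-11      # Newton's gravitational constant, in m^3 kg^-1 s^-2
k_B = 1.381e-23    # Boltzmann constant, in J/K

# Planck temperature: the unique temperature you can build from these constants
T_P = math.sqrt(hbar * c**5 / (G * k_B**2))
print(f"Planck temperature: {T_P:.2e} K")  # roughly 1.4e32 K
```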

But there are other answers for what the maximum attainable temperature might be. Again, it comes back to the many different — and somewhat shaky — definitions of temperature. One I quite liked was the Hagedorn temperature, beyond which you can’t heat hadrons any more. Hadrons are particles, like protons and neutrons, that are made out of quarks. It turns out that if you heat them beyond a certain temperature (around two trillion kelvins) — instead of “getting hotter”, you just start producing more hadrons. In other words, that energy is converted into more particles, rather than into “heat energy” in the particles that already exist. In some sense, you’ve boiled the particles down into a soup of quarks — and when you add more energy, rather than heating the soup, you add more quarks to the soup, which produce “fireballs” of new particles. So, in a sense, the temperature of this kind of matter stays the same, no matter how much energy you put in — at least, until something even wilder happens.

The amazing thing about this is that we can actually reach the Hagedorn temperature at places like CERN, in the Large Hadron Collider. Briefly, the hadrons there get so hot that any new energy just goes into making more hadrons, rather than raising the temperature. In a sense, temperature has maxed out. But there are other quantities you can measure — say, the entropy of the produced particles — that will show things continuing to go up.

You’ll probably remember from our episode on superconductors that the physics of matter at extremely low temperatures can get very interesting. In superconductors, for example, the electrical resistance suddenly drops to zero. You can set a current flowing around a coil of superconducting wire and it’ll still be there, without losing energy, billions of years later. There are other fascinating properties of matter that emerge at low temperatures — including superfluids and other such exotic states. And, perhaps, by pushing matter closer and closer to absolute zero, we will find more and more strange properties arising as quantum effects start to dominate. Essentially, when you probe matter at these tiny temperatures, the fact that energy is quantized — it comes in little packets, rather than being continuous, so you can’t have just any value you like — becomes very important. The energies you’re dealing with are about the size of a single quantum packet, rather than so many packets that everything looks continuous, and that’s a big part of why so many strange properties start to arise. But more on this will have to wait for our big quantum episodes.
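As a back-of-the-envelope sketch of why quantum effects take over: an atom’s thermal de Broglie wavelength, h divided by the square root of 2πmk_BT, grows as the temperature drops, and once it’s comparable to the spacing between the atoms, quantum statistics run the show. The rubidium-87 atom and the temperatures below are my illustrative choices, not values from the episode:

```python
import math

h = 6.626e-34        # Planck constant, in J s
k_B = 1.381e-23      # Boltzmann constant, in J/K
m = 87 * 1.66e-27    # mass of a rubidium-87 atom, in kg

# Thermal de Broglie wavelength: the quantum "size" of an atom at temperature T
for T in (300.0, 1e-3, 100e-9):  # room temperature, 1 mK, 100 nK
    wavelength = h / math.sqrt(2 * math.pi * m * k_B * T)
    print(f"T = {T:.0e} K  ->  wavelength = {wavelength:.2e} m")
```

At room temperature the wavelength is far smaller than the atom itself; at a hundred nanokelvin it’s a sizeable fraction of a micron, comparable to the spacing between atoms in a dilute ultracold gas, which is when exotic states like Bose-Einstein condensates can appear.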

So there you have it: the laws of thermodynamics. Energy cannot be created or destroyed; in an isolated system, entropy will always increase; absolute zero of temperature is unattainable. You can’t win, you can’t break even, and you can’t get out of the game. These laws — often discovered by thinking about gases and idealized heat engines, back in the nineteenth century — have proved amazingly resilient through all kinds of new physics. They are far more fundamental than the scientists who came up with them may have thought, and almost all theories — even really wild and exotic states of matter — seem bound by these rules. Their consequences range from the motions of the smallest atom to the ultimate fate of the Universe, and everything in between. Like it or not, we’re all playing the thermodynamic game; and there’s no way out.

Thanks for listening etc.

www.physicalattraction.libsyn.com