The bulk of the scripts from the nuclear fusion episodes of the podcast are released here. You can find the original episodes here.
Nuclear Fusion: Pinches, Stellarators, and Perhapsatrons
Hello and welcome to the next chapter in our nuclear fusion odyssey. Last time, we talked about the first time humans were able to mimic nuclear fusion, that process that fuels the stars — in the course of making weapons. We discussed Edward Teller’s fanatical obsession with a “Super” bomb, and how he stole the credit from Stanislaw Ulam for actually designing it. After concerns about fallout from some of the nuclear tests led to the Test Ban Treaty, Teller continued to draw up plans in a bizarre programme — “Project Plowshare” — throughout the 1950s and 1960s. The stated aim was to use hydrogen bombs to carry out civil engineering projects, because I guess conventional digging is way more boring than nuclear-bomb-assisted digging. But in reality, it was probably to allow weapons development to go ahead and to continue to justify large research and development budgets.
Teller’s views were far from typical amongst scientists. We’ve described, in many of our episodes about nuclear weapons, how those early physicists faced a particular dilemma. At the time, it was a unique dilemma: these days, it’s ever-more common. Those who research CRISPR gene-editing, or artificial intelligence, or geoengineering, or nanotechnology, will understand this dilemma. The technology you’re looking into is dual-use. It has the potential to do great good, and also to do great harm. What’s more, the way you look into it — the decisions you make, the possibilities that your research might open up, and the potential for unintended consequences — can influence whether the technology is used for good or ill. Imagine those nuclear physicists who had seen their breakthroughs become the cold mathematics behind the devastation of Hiroshima and Nagasaki, and now plunge the world into a thermonuclear standoff. They have no doubt that understanding the secrets of nuclei could unlock tremendous, world-changing forces. And they are desperate to show that stealing this fire from the Gods was justified. That the human species can harness these powers for good. That, rather than the first step along the road towards our self-destruction, splitting the atom can help humanity towards an enlightened world; a scientific world; a world where nuclear power provides immense benefit for the whole species.
Of course, this is difficult, because — technically speaking — even in the Sun, nuclear fusion is impossible.
The Sun’s plasma consists of nuclei and electrons that have been separated; the nuclei are fully ionised by the immense heat. We’ve already described in previous episodes how you need to supply nuclei with an immense amount of energy to overcome the so-called Coulomb barrier — the electrostatic repulsion between the protons in the nuclei. As any pair of nuclei approach each other, they rapidly decelerate as their kinetic energy is converted into electrostatic potential energy due to the charges being forced closer together. So you need your nuclei to have an immense amount of kinetic energy before they can dream of getting close enough — on the order of femtometres — for the strong nuclear force to take over and fuse them together. This releases energy, which can heat the plasma further. If the rate of fusion is fast enough for a self-sustaining reaction, the plasma will “ignite” and “burn” at a steady temperature, which you can usefully extract heat energy from. This is the dream of a nuclear fusion reactor; replicating the same process that powers the Sun here on Earth.
But in the Sun, it’s — strictly speaking — not possible. At least, not possible with classical mechanics. Even with the immense heat and pressure in the heart of the Sun, where temperatures regularly get up to 15 million kelvin, the kinetic energy of the protons is simply not enough to overcome that Coulomb barrier. You can do simple calculations — e.g. with the widget on Hyperphysics online — that tell you that the approximate temperature required for the average nucleus to have enough energy to overcome that Coulomb barrier is 4.6 billion kelvin. Even the heart of our Sun is three hundred times colder than it would need to be for your average nucleus to be able to fuse.
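If you want to sanity-check that number yourself, here is a rough back-of-the-envelope version of the sort of calculation the Hyperphysics widget does, sketched in Python. The 2-femtometre barrier distance is my assumption for where the strong force takes over, so treat the answer as an order-of-magnitude figure:

```python
# Rough classical estimate of the temperature needed to overcome the Coulomb
# barrier for two protons. The constants are standard; the barrier distance
# (~2 femtometres) is a ballpark assumption, not a precise value.
e = 1.602e-19        # elementary charge, C
k_e = 8.988e9        # Coulomb constant, N m^2 / C^2
k_B = 1.381e-23      # Boltzmann constant, J / K
r = 2e-15            # separation where the strong force takes over, m

barrier = k_e * e**2 / r     # electrostatic potential energy at that separation, J
T = 2 * barrier / (3 * k_B)  # temperature where average kinetic energy (3/2 kT) matches it

print(f"Barrier: {barrier / e / 1e6:.2f} MeV")
print(f"Classical temperature: {T:.2e} K")  # comes out in the billions of kelvin
```

Depending on the exact separation you assume, you land somewhere between four and six billion kelvin, which is why the quoted figure is approximate — but it is always hundreds of times hotter than the Sun's core.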
The weirdness that allows for our Sun to burn, and hence for the Universe as we know it to exist, is of course — quantum weirdness! You may have heard of quantum tunnelling; we’ll get into it more in future episodes, but the essential idea is that the precise position of particles has a certain degree of uncertainty to it. In fact, it doesn’t make sense to say that a subatomic particle is in a particular location; you can’t pin it down with a precise set of coordinates. Instead, the most complete description you could hope to give is a probability distribution over space. The particle has a 10% chance of being in this range of positions, a 20% chance of being over here, and so on. Instead of a particle, which we imagine as a small point-like object at a specific location, you have a “wavefunction” of probabilities: places that the particle could be.
It turns out that this means particles have a finite probability of turning up in regions where they’d be classically forbidden to tread. The same thing happens with alpha-decay, which you might remember from our radiation episodes, or the episode on Rutherford’s Gold Foil experiment. Those alpha particles have some small probability of being sufficiently far outside the nucleus to escape — and that’s precisely how those decays work. This phenomenon, where particles appear to “tunnel” through energy barriers that they could classically never cross, is quantum tunnelling — and quantum tunnelling effectively lowers the energy barrier and allows fusion to occur in stars like our Sun.
So when you’re considering the immense challenge of making fusion work in laboratories, on Earth, bear in mind that it’s only the strangeness of quantum mechanics that allows the damn thing to work in stars.
First, you needed some ability to heat plasma to temperatures comparable to those at the heart of the Sun. Then, you needed some way to hold that plasma at immense temperatures and pressures — sufficient to ignite a self-sustaining fusion reaction — without the whole thing exploding. These are not easy challenges.
Very roughly speaking, what you’re looking for is something that can satisfy the Lawson criterion for a self-sustaining reaction. This is sometimes called the Lawson triple product, and has been rewritten and estimated for many different types of approach to fusion. It was first declassified in 1957 — as the focus began to shift from achieving high temperatures to confining those hot and dense plasmas for sufficient time for a self-sustaining reaction to arise. The triple product is the temperature of the plasma, multiplied by the density of the plasma, multiplied by the approximate lifetime of its confinement. That confinement time is set by how quickly the plasma loses energy, due to radiation and mass loss from the plasma. To achieve a self-sustaining reaction, you need a very dense, very hot plasma to be confined for a significant amount of time. You can make the plasma cooler, but then you need it to be denser, or to confine it for longer. Decreasing any one of these must be compensated by increasing the others.
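To make that trade-off concrete, here is a minimal sketch of the triple-product check in Python. The threshold value and the example plasma numbers are illustrative figures of roughly the right magnitude for deuterium-tritium fuel, not numbers taken from this episode:

```python
# A sketch of the Lawson triple-product check for a deuterium-tritium plasma.
# The threshold (~3e21 keV s / m^3, near the optimum temperature of ~15 keV)
# is a commonly quoted approximate figure.
def triple_product(density_m3, temperature_keV, confinement_s):
    """Return n * T * tau in keV s / m^3."""
    return density_m3 * temperature_keV * confinement_s

THRESHOLD = 3e21  # keV s / m^3, approximate D-T self-sustaining requirement

# Illustrative numbers of roughly the magnitude modern experiments aim for:
n, T, tau = 1e20, 15.0, 3.0
product = triple_product(n, T, tau)
print(f"n T tau = {product:.1e} keV s/m^3 -> self-sustaining: {product >= THRESHOLD}")
```

The point of writing it this way is that the three inputs are interchangeable: halve the confinement time and you must make up for it with density or temperature.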
Yet when you make matter extremely hot and dense, the result is an enormous amount of pressure on the walls of your container. The temperature is also intolerable. The element with the highest boiling point is tungsten, at a frankly pathetic 6,000 K or so. Diamonds can’t even get you that high. For a start, if you heat them in air, they burn and emit carbon dioxide. If there’s no oxygen supply, they first turn into graphite, a more stable allotrope of carbon, and then they sublime — that is, the solid turns straight into a gas — at around 4,000 K. When physicists first started to calculate the possibilities of deuterium fusion, they thought the temperature required for a self-sustaining reaction on Earth could be as high as 500 million kelvin.
No substance on Earth could possibly contain a plasma that hot without being swiftly vaporised. So if plasma were held in any known ‘container’, the confinement time would be far too small to be useful, and any small amounts of energy you could produce from fusion would quickly dissipate into vaporising the container.
You could go down the Edward Teller route of using atomic bombs to explosively compress deuterium fuel. But hydrogen bombs were not the peaceful energy source of the future. As George Thomson, a British physicist who will become crucial to this part of the story, wrote: “One answer would be to make the confinement time very small by using extremely high densities; but such a device would be the same as a hydrogen bomb, and we are not interested.”
It’s difficult to pin down the earliest origins of this idea, but at least by 1946, physicists were experimenting with what seemed to be the only reasonable solution: confining the plasma using magnetic fields. Sufficiently strong magnetic fields can, of course, “levitate” charged particles — in fact, Andre Geim, who won the Nobel Prize for discovering graphene, famously also used a particularly strong field to levitate a frog via diamagnetism.
With a magnetic bottle, your plasma needn’t touch anything. The laws of electromagnetism — Maxwell’s equations — were already known and quite well understood at this time. The Lorentz Force Law tells us how magnetic fields act on charged particles. Charged particles actually only feel a force from magnetic fields when they move through them; when a charged particle moves through a magnetic field, a force acts on it that’s perpendicular to its velocity and to the magnetic field itself. If your magnetic fields are aligned juuust right, you can hope to confine the plasma for long enough to get a reaction going — without needing something the size of the Sun. This is the field of magnetic confinement fusion.
So imagine you have a magnetic field pointing into a piece of paper, through the plane of the paper. Then imagine you have a charged particle moving along the piece of paper — say, from the bottom of the paper to the top. The Lorentz force will then tend to deflect it to the left or right, depending on which way your magnetic field is pointing. Because the force is always perpendicular to the velocity, it’s precisely the condition for circular motion; in just the same way as the Sun constantly tugging on the Earth perpendicular to its velocity drags it into a roughly circular orbit, so you can use the Lorentz force to confine charged particles in their orbit. This is, of course, similar to the techniques used at particle accelerators like the Large Hadron Collider, and we can deduce things like the temperatures of interstellar gas and dust based on how they rotate through the background magnetic field of the Galaxy.
In a similar way to how charges moving through a magnetic field feel a force, moving charges themselves generate a magnetic field that wraps around their direction of motion. For this reason, a coil of wire — sometimes called a solenoid — generates a magnetic field that goes straight through the heart of the coil when a current passes through it. Again, this is very well understood: rotating coils of wire in magnetic fields to produce electricity is the basis for all electrical power generation aside from solar energy — and would be for a nuclear fusion reactor, too. The only difference is where you get the heat from.
So if you wrap a coil of wire around a tube and pass a current through it, you should produce a magnetic field that will go along the axis of the tube. You can then imagine plasma spiralling in a tight little helix, around that central axis. This is the natural motion for particles in a magnetic field. Imagine injecting charged particles into a magnetic field. In the direction perpendicular to the field, they’ll be pushed into circles that orbit the magnetic field direction. Parallel to the magnetic field, no force acts on them, and so they will just carry on moving. The result is a sort of corkscrew, helix motion, along and around the direction of that magnetic field.
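You can put rough numbers on how tight that corkscrew is. Here is a small Python sketch of the gyroradius of a deuteron spiralling around a field line; the 15 keV energy and 5 tesla field strength are illustrative assumptions of roughly fusion-grade magnitude, not figures from the episode:

```python
import math

# How tight is the helix? Gyroradius r = m v_perp / (q B) for a deuteron
# with a fusion-grade thermal energy in a strong magnetic field.
q = 1.602e-19   # deuteron charge, C
m = 3.344e-27   # deuteron mass, kg
B = 5.0         # magnetic field strength, T (assumed)
E = 15e3 * q    # perpendicular kinetic energy: 15 keV in joules (assumed)

v_perp = math.sqrt(2 * E / m)  # speed from E = 1/2 m v^2
r = m * v_perp / (q * B)       # gyroradius of the circular part of the motion
f = q * B / (2 * math.pi * m)  # cyclotron frequency: orbits per second

print(f"speed ~ {v_perp:.1e} m/s, gyroradius ~ {r * 1000:.1f} mm, "
      f"orbit frequency ~ {f:.1e} Hz")
```

The helix comes out only millimetres across, which is what makes magnetic confinement plausible at all: across the field, the particles are tied tightly to the field lines; it's only along them that they roam free.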
Less than a mile away from where I’m sitting and typing this right now, in Oxford University’s Clarendon Laboratory, some of the earliest magnetic bottles for fusion plasma were designed and tested, as early as 1946. British physicists were amongst the first to come up with a design for a magnetic bottle, based on a concept called the “pinch” effect.
The idea here is to start with a cylinder of plasma. Remember that plasma is just a hot mixture of electrons and nuclei — a bit like a gas, but fully ionised. Because the plasma is full of charged nuclei and charged electrons, it’s made of charge-carriers, and you can pass a current through this tube of plasma just as you would through a wire.
When you pass a current through the tube of plasma, though, the result is a magnetic field that wraps around that current. It tends to force the plasma inwards, crushing the cylinder onto its axis. Fans of high-voltage electricity will sometimes use a similar effect — the crushing-inwards force of the magnetic field — to crush aluminium cans. You can even see it in the effects of lightning strikes on lightning rods — the sudden shock of the vast current compresses and pinches the lightning rod, bending it out of shape.
The idea with a pinch fusion reactor is to use it to compress and contain the plasma. As an added bonus, the compression of the plasma makes it denser and heats it up. You can imagine that, perhaps, if you pump a sufficiently high current through the plasma, the pinch effect might just be enough to cause a self-sustaining fusion reaction in the centre of the reactor.
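To get a feel for how hard a pinch can squeeze, here is a rough Python sketch of the magnetic pressure generated by a current flowing down a plasma column. The 100 kA current and 1 cm column radius are illustrative assumptions, not figures from any particular machine:

```python
import math

# For a current I down a plasma column of radius a, the field wrapping
# around the surface is B = mu0 * I / (2 pi a), and the inward magnetic
# pressure squeezing the column is B^2 / (2 mu0).
mu0 = 4 * math.pi * 1e-7  # vacuum permeability, T m / A
I = 1e5                   # current through the plasma, A (assumed)
a = 0.01                  # plasma column radius, m (assumed)

B = mu0 * I / (2 * math.pi * a)  # azimuthal field at the column surface, T
p = B**2 / (2 * mu0)             # inward magnetic pressure, Pa

print(f"B ~ {B:.1f} T at the surface, pinch pressure ~ {p / 1e5:.0f} bar")
```

Even these modest assumed numbers give a squeeze of over a million pascals, and the pressure grows with the square of the current: hence the hope that a big enough current pulse might compress the plasma all the way to fusion conditions.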
This idea, and variants of it — passing current through the plasma to squeeze and compress it into a fusion-ready state — were the very first designs for fusion reactors. The first patent for a fusion reactor was filed in 1946 by the British physicists Thomson and Blackman, and was based on this design.
Of course, one issue with a tube like this is that the plasma isn’t confined at the ends and spills out. One solution you might come up with, of course, is to make sure that there is no end: you have a torus, a doughnut-shaped tube, surrounded by a coil of wire. You can then force the plasma into a circular orbit and confine it for a longer time.
But there is an issue with a simple toroidal design, a doughnut shape. A doughnut has an inner and outer ring, and clearly the radius of the inner ring is smaller than that of the outer ring. If you imagine wrapping a coil of wire again and again around the edges of a doughnut, you’ll quickly realise that on the outside of the doughnut, the loops of wire are spaced much further apart than they are on the inside. This means the magnetic field is stronger towards the centre of the torus, where the loops of wire are closer together as they’re only spread across the small inner radius of the doughnut. The nice, ideal, even magnetic field you had before is ruined; now you have a field gradient, which forces the electrons and nuclei in opposite directions, and before long your ordinary torus has nuclei and electrons drifting into the walls of the container. Fermi pointed out that eventually, for reasons of pure geometry, this arrangement can’t work: the plasma will separate and escape. The torus leaks from its sides, and you can’t get good confinement.
There’s a famous story that Lyman Spitzer — who would go on to be the founder of one of the earliest laboratories solely dedicated to nuclear fusion in the United States — hit upon what he thought might be a solution while he was riding a ski-lift. He was an avid mountaineer, and he’d just received a phone-call about a blockbuster New York Times story on nuclear fusion — one that we’ll cover in a bonus episode shortly relating to the activities of a certain Ronald Richter.
Perhaps there is something to be said for an era before gadgets allowed us to be occupied at all times, and a situation like being stuck on a ski-lift meant that you were forced to keep yourself entertained. The story goes that Spitzer — who happened to be a genius astrophysicist studying the plasma of the “interstellar medium” which surrounds us in our galaxy — came up with a new design. This would be based on a figure-eight design, with two doughnuts connected together. The idea here is pretty simple: if you imagine the plasma travelling through the figure eight, you can see that it will pass through one of the loops clockwise, and one of the loops anti-clockwise. This is important, because the Lorentz force that acts on charged particles is very concerned about whether they’re travelling clockwise or anticlockwise. The drift towards the edge of the tube is still there, but in opposite directions in each half of the tube. If you construct your figure-eight just right — or so Spitzer hoped — the drift will cancel out, and you’ll have decent enough confinement to make a working fusion reactor.
He assumed that the predominant way that energy would be lost from the plasma would be due to Bremsstrahlung — so-called braking radiation. When charged particles accelerate or decelerate, they emit radiation — this is how matter and light interact, and it’s why, for example, we can generate radio waves. The electrons in the plasma are charged particles, and they’ll collide with nuclei and other charged particles, accelerating and decelerating — thus releasing radiation, and losing energy. If you assume that this is how plasma loses most of its energy, you can get to a figure of around 50 million kelvin. That’s the temperature at which you break even — the fusion reactions you’re producing generate enough energy to cancel out the losses due to radiation. It was still five times hotter than any temperature that had ever been generated in the lab at this time. To produce reasonable power, you’d need at least 100 million kelvin.
Spitzer got a grant in 1951 and began work on this new fusion reactor, giving it the grandiose title of the “Stellarator.”
Compare this rather fancy title with the name physicist James Tuck gave the device he constructed to exploit the pinch effect at Los Alamos. He was a veteran of both the Manhattan Project and the early experiments with magnetic confinement fusion, and in 1951 he also managed to get funding to build his reactor based on the pinch effect. Tuck was sceptical of Spitzer’s optimism — he felt that “a self-sustaining thermonuclear reaction should not be thought of until a detectable reaction has been created and the problems of ionization, conduction, and the effects of magnetic fields have been worked out on a small scale.” He was concerned that in a large, spatially spread-out plasma, thermal conduction would take too much energy from the Stellarator’s plasma — and hence the pinch, which involved fusion occurring in a much thinner filament, was a more attractive prospect. Spitzer felt that his device was better — it worked steadily, constantly, while the pinch device worked only in pulses, which Spitzer felt would prove impractical for generating power. Tuck and Spitzer met in May, discussed their designs, and both came away feeling that they were right. Fusion, in these early days, was already a camp divided. Tuck, slightly less confident of success, called the pinch device he was working on the Perhapsatron. Tuck, who had been amongst those physicists dazzled and horrified by the Trinity atomic bomb test years before, would spend the rest of his life pursuing fusion in some form or another.
Alongside this, there was a design that was tentatively called the magnetic mirror, which would confine the plasma to a tube by kinking and curving the magnetic fields at the ends. If the magnetic fields were made stronger at the ends of the tube, and weaker towards the middle, then they might serve to “reflect” the plasma — bouncing it back and forth in the tube so that it could be confined for long enough to cause a self-sustaining fusion reaction.
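Magnetic mirrors are leaky in a well-understood way: particles travelling too nearly parallel to the field sail straight out of the ends rather than being reflected. Here is a small Python sketch of that "loss cone"; the mirror ratio of 5 is an illustrative assumption:

```python
import math

# Particles whose velocity lies within the "loss cone" escape a magnetic
# mirror. The half-angle of that cone satisfies sin^2(theta) = B_weak / B_strong,
# where the mirror ratio B_strong / B_weak is a design parameter (assumed here).
mirror_ratio = 5.0                                # B_strong / B_weak
theta = math.asin(math.sqrt(1.0 / mirror_ratio))  # loss-cone half-angle, rad

# For an isotropic plasma, the fraction of particles inside the cone
# (and so lost immediately) is 1 - cos(theta):
lost_fraction = 1.0 - math.cos(theta)
print(f"loss cone ~ {math.degrees(theta):.1f} degrees, "
      f"~{lost_fraction:.1%} of particles escape straight away")
```

Even with a strong mirror, a non-trivial slice of the plasma is lost out of the ends immediately, and collisions keep refilling that slice, which is one reason mirror machines struggled with confinement.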
The late 1940s and early 1950s must have been an incredibly exciting time for nuclear fusion. Just think about the historical context that these scientists were dealing with. Just thirty or forty years before, no one alive knew that there was such a thing as a nucleus. In just eleven years, people had gone from having no idea that nuclear chain reactions could possibly exist to ending the most devastating war in history with two of them. Nuclear physics was a field where ideas were going from inconceivable to impossible to works-in-progress to prototyped to world-changing deployment in the span of ten years. And now, in 1951, physicists were a year away from finishing a bomb that used nuclear fusion, and had three potentially competing designs that were hoping to use magnetic confinement fusion to become the energy source of the future.
In Spitzer’s initial research proposal for the Stellarator, he suggested that it might provide 150 MW of power. That would put it on a par with some conventional power plants — yet you wouldn’t depend on fossil fuels. Nuclear fusion reactors also have a large advantage over nuclear fission reactors in that they are — sort of — fail-safe. A nuclear fission reactor is — more or less — a controlled explosion. Your chain reaction is moderated by the type of fuel that you put into the device, or the control rods that are raised and lowered to change the rate of the reaction. As people at Chernobyl found out, if you fail in your efforts to control that chain reaction, the results are explosive. But, in magnetic confinement fusion, if your efforts to control the plasma fail, what happens? Well, that plasma might crash into and destroy the walls of your reactor, and probably destroy the reactor itself. That’s going to cost you an awful lot of money, and is obviously a Bad Thing, but it won’t be as nasty as an accidental meltdown at a fission reactor can be.
The very worst case scenario is that, if the secondary containment vessel also fails, some of the tritium gas produced leaks into the surrounding environment, and you have to shut down the site and move somewhere else for a few decades. But it’s nothing like Chernobyl, or any other disaster; there’s no potential for a catastrophic explosion.
It’s not true to say that nuclear fusion results in no nuclear waste; but many of those who’ve analysed the problem suggest that it could be far more limited than what you get from fission reactors, and ultimately storing and handling that nuclear waste adds a huge amount to the cost of running and maintaining a fission reactor. Furthermore, since fusion technology alone can’t be used to create a nuclear bomb — you’d really struggle to weaponise magnetic confinement fusion — this new source of energy could likely be shared with the world, without worrying about whether countries that claimed to want atoms for peace were really interested in atoms for war.
Already in the early 1950s we’re starting to see hyperbole about how fusion will soon power the world, and move us into a new nuclear age of energy too cheap to meter — unlocking the boundless potential of the Sun. Can we really judge people too harshly for getting over-excited about the potential for new technology? There are entire religions based around this now.
So next time, we’ll have a bonus episode which will go into the first #FakeFusion detour from progress — Ronald Richter’s Argentinian gambit. As nuclear fusion grows in the public consciousness as a solution-to-many problems, the temptation to make… somewhat premature claims about having discovered it also grows. This is just the first example — but it’s historically important, because it actually sparked a lot of the funding into fusion research that happened in the 1950s, as governments became concerned they might be falling behind. In fact, it was this very story that Spitzer was thinking about when he sat on his ski-lift and came up with the Stellarator. So never let it be said that fraudsters and failed experiments can’t change history: just not in the way you might expect.
Then, once we’ve dealt with that little detour, we’ll deal with the early designs of stellarators, pinch-fusion reactors, and magnetic bottles — and the 1958 experiment that led a national newspaper, the Daily Sketch to print a banner headline which said:
I’ll see you then.
Nuclear Fusion BONUS — Juan Step Beyond
Roland Was A Richter
When you see a news article — and I’ve written at least one of these myself — that says “such-and-such a person says we’ll have working nuclear fusion within 15 years” — you need to be very careful about the definition.
Nuclear fusion — as in, humans getting individual nuclei to fuse together — has been a reality since 1934, when Rutherford and his collaborators used a particle accelerator to smash deuterium nuclei into each other. It was using the results from this experiment that they were able to measure the interaction cross section — the likelihood — for such an interaction to take place between deuterium nuclei. But it’s no good for making power, as the vast majority of nuclei bounce straight off each other and don’t fuse together. You require a fuel at a very high temperature, undergoing lots of collisions, and producing sufficient energy as a result of fusion interactions that the whole thing is self-sustaining and will produce more power than it requires to run.
Nuclear fusion for energy was mostly just a pipe dream in the minds of a few physicists in scattered laboratories across the US and Europe when, in March 1951, Argentine President Juan Peron announced that he’d cracked it.
Peron had hardly always been a friend of the sciences. When he first came to power, in 1946, he’d purged Argentina’s universities — resulting in the firing of ten thousand professors — and opened a huge rift between academia and the Peron regime. Yet Peron, although perhaps no friend to scientists in particular, had the politician’s wit and the military background to understand the advantages of keeping ahead of the scientific curve.
Despite this, efforts towards a nuclear programme in Argentina were pretty slow until they were noticed in 1947, when William Mizelle wrote an article in the political news magazine The New Republic about “Peron’s Atomic Plans”:
With world famous German atom-splitter Werner Heisenberg invited to come to Argentina by Peron’s Government and with a major uranium source discovered in Argentina, that Nation is launching a military nuclear research program to crack Pandora’s box of atomic energy wide open. Argentina’s determined atomic adventure and its frankly military purposes cannot be dismissed as the impractical dream of a small nation.
There was immediately considerable international pressure on Argentina to abandon its nuclear programme — at this point, the US and the USSR were heading towards a shaky duopoly, and certainly neither of them wanted nuclear weapons to be developed by random countries in Latin America. Ironically, of course, this just made Peron all the more determined to develop the nuclear programme, which more or less existed only on paper at that point.
It was into this context that one Ronald Richter showed up. Richter was one of the many émigrés formerly living under the Nazi regime who found their way to Argentina in the years following the war. This was perhaps part of the reason why the New Republic was taking Argentina seriously — of the German and European scientists who hadn’t been scooped up by the US or the USSR, many of the remainder had gone to Argentina.
But Richter was hardly one of the academic elite that the US and Russians had competed over. For his thesis — which, depending on the source, was either awarded or not — he wanted to look into the non-existent “delta rays” which supposedly emerged from the heart of the Earth. “He had read about them… obviously not in a scientific journal,” recalled one professor. One source suggests that the delta rays that Richter was investigating were actually just X-rays that were scattered by the ground. He was not famous for his scientific aptitude and had no major publications to speak of.
But Argentina was looking for scientists; it was looking to compete on the world stage. With the USSR and the US grabbing a great deal of the major scientists, Argentina was left with Richter, and Peron told him to come up with a project. Richter told him that he could make nuclear fusion happen — providing unlimited power for Peron’s government — provided he had an essentially unlimited budget.
Apparently, Richter — who had to communicate with Peron in broken English or through a translator — struggled to get his precise point across; but one thing that he did was continue to point at the flag, the Sun in the Argentine flag. “I can bring you this. I can bring you the Sun.”
Despite his lack of scientific credibility, Richter must have had something about him — even if he was just a good salesman. Perhaps the same wild enthusiasm — which failed on the professors he tried to persuade about delta rays — worked on Juan Peron. Peron, for his part, was probably blinded by the dream of being the first to achieve nuclear fusion, and to provide all the energy that Argentina would need to industrialise. He gave Richter a blank cheque for his research.
In classic mad-scientist fashion, Richter actually got an entirely secret island to work on his project. The story goes that Richter flew around Argentina on an aeroplane and got to point to his favourite location, which was a small island in the middle of a lake with plenty of access to freshwater. The Isle of Huemul was converted into Ronald’s personal laboratory. The reason for the secrecy was probably a combination of avoiding international scrutiny, and also avoiding the scrutiny of Argentina’s mainstream scientific community — most of whom disliked Peron. But for Ronald, those years must have been great. He had four hundred staff on this beautiful island, and built an 11m tall bunker filled with various bits of scientific apparatus. All in all, the Huemul project — from start to finish — would cost around $300m.
According to Enrico Fantoni:
“When, in mid-construction, it was determined that some radial 5cm pipes leading to the 1,400-cubic-metre reactor’s core had been installed incorrectly, Richter made the builders tear down the entire cement structure and build it again from scratch.”
One wonders if he was stalling for time at this point, because after three years, Peron must have been a little concerned about where his millions of pesos were going.
On March 25, 1951, however, they apparently deemed the secret project ready to be shown to the world. You can still see archive footage of Peron and Richter appearing together, to the delighted applause of the gathered attendees. Peron claimed that the experiments had brought matter to a temperature of millions of degrees, and that the fusion energy plant was in the process of making “artificial suns on Earth.” Richter, for his part, was no less enthusiastic about the project. “What the Americans get when they explode a hydrogen bomb, we in Argentina achieve in the laboratory and under control. As of today, we know of a totally new way of obtaining atomic energy… for the very first time, a thermonuclear reaction has been produced in a reactor, that uses no materials thought to be indispensable.”
The reaction from the mainstream scientific community was swift and harsh. Sure, it was an era of scientific breakthroughs, but the idea that some random physicist no-one had ever heard of had achieved nuclear fusion reactions essentially on his own, with no input from the mainstream scientific community, and providing very sketchy details… something didn’t add up.
Scientific discoveries were typically group efforts, announced in peer-reviewed literature.
Perhaps there was a degree of arrogance on the part of the Americans; after all, in March of 1951, the only research into fusion was for weapons. Spitzer hadn’t yet come up with the Stellarator idea, and the Perhapsatron had not yet been proposed or funded. The idea that the US might be beaten to the punch by the Soviets was one thing — but in a field that they hadn’t even begun to take seriously, and by Argentina — that was quite another.
Former Manhattan Project physicist Ralph Lapp summed up the initial reaction when he said: “I know what the other material the Argentines are using is; it’s baloney.”
Yet some other physicists were less keen to dismiss the project entirely. On the 1st of April, the New York Times reported that a French physicist was supporting Richter’s claims and that he had recently performed experiments that were very similar. This seems a little unlikely, given that no-one really knew what Richter’s experiment was. The details, of course, were shrouded in secrecy.
Richter had leaked a few. He called the device the “thermotron”, and claimed that it worked by fusing deuterium and lithium together — creating micro-fusion bombs that were contained within the concrete walls of the reactor. He also claimed the temperatures required were only 5000 Celsius — conveniently just within the range where you might believe that the “fusion” could be contained in a concrete bunker, but far below the temperature that theoretical calculations of the energy requirements demanded.
So how the hell was Richter’s fusion supposed to work? Like plenty of these dodgy experiments, it’s not entirely clear. According to one account, the best he did was to burn hydrogen in an arc of electricity. Hydrogen will burn — it reacts with oxygen to produce water vapour, and the “squeaky pop” experiment is well known to school-children everywhere. Apparently, Richter then bombarded the burning hydrogen with lithium ions, which caused an explosion that cracked the concrete structure. One version of the experiment took place in a tube, which was supposed to reflect energy to keep the reaction going. It seems that at one point, Richter ordered the original reactor to be torn down in favour of building a magnetic confinement device, but nothing ever came of it. There are also tales of readings on a Geiger counter that suggest radiation levels were high in the concrete bunker. Other, more charitable accounts suggest that Richter did have some new ideas — including blasting the plasma with sound waves to compress and heat it further, which would explain why he felt the need to buy a very powerful loudspeaker. Interestingly, this method — ion-acoustic heating — would come back later and was rediscovered by serious physicists. But it certainly didn’t lead to fusion.
Chances are that we’ll never really know what, precisely, was going on inside the Huemul “reactor”. Maybe Richter was just sat in there reading the funnies in the newspaper and occasionally blowing up a tank of hydrogen to keep up appearances. Or maybe he was entranced by the fire of his explosions, and convinced himself that he was on the verge of achieving fusion — and needed to keep Peron onside.
Like all such dodgy fusion experiments, no one was allowed to see the reactor or verify the results for themselves. The result was a great deal of media froth with approximately zero scientific discussion — but a lot of juicy quotes for journalists to seize on.
The whole affair was summarised by Austrian physicist Hans Thirring in one of these barbs.
Thirring, the director of the Institute for Theoretical Physics in Vienna, wrote in a journal that “there is a 50 per cent probability that Perón is giving credit to the ravings of a fantasist; a 40 per cent probability that the president has been the victim of a huge scam; a nine per cent chance that Richter and Peron are attempting to bluff the entire world into thinking they have nuclear fusion… and a one per cent chance that Richter is telling the truth.”
Richter, for his part, fought back:
“We are deeply sorry for Herr Thirring, who has revealed himself to be a typical textbook professor with a strong scientific inferiority complex, probably supported by political hatred.”
Anyone who has ever dealt with a crank or a conspiracy theorist will get a sense of where Richter’s head was at in June of 1953 when he gave that statement. All the tropes are there; it’s just a shame no-one used the word “sheeple” in the 1950s.
I will give Richter some credit: Thirring said that there was a 40% chance that he was attempting to scam Peron. I don’t think that was necessarily true — because, if this had been a scam, Richter surely would have tried to flee Argentina with his millions of ill-gotten pesos at some point. He never did.
I think that Richter was probably deluding himself — in the same way as people who run Ponzi schemes delude themselves. He’s like the physics equivalent of Bernie Madoff — forced to fake the results and make ever more elaborate claims as his whole empire, built on non-existent foundations, came crashing down. Fed by the dream of nuclear fusion, delusions of grandeur, and under immense pressure from Peron to get this thing right, he had wasted millions of dollars on an unscientific pipe dream.
The tragedy of people who build these castles of sand is often that they are given every opportunity to plumb the depths of desperation before they are finally put out of their misery. Such was the case with Richter. His sceptics mocked him when they saw him with a bandage on his hand: “Did one of your atomic bombs explode again?”
Eventually, so the story goes, he was brought down by a Navy pilot called Pedro. The man visited to inspect the plant — and noticed that Richter was essentially just blowing up tanks of gas filled with nitrogen and hydrogen, then scribbling the words “nuclear energy” onto scraps of paper. This just confirmed the scepticism that many outside of Peron’s inner circle had about the Richter project — and those who had consulted physicists about the claims were even more sceptical. Pedro convinced Peron, who was increasingly suspicious that Richter was failing to fulfil any of his grand promises, to launch an investigation into the whole project. In September 1952, eighteen desperate months after the press conference where he had announced a nuclear fusion breakthrough, a team of scientists actually came to visit the project.
Oddly enough, the Geiger counters that the physicists brought with them registered no signs of any additional radiation. Yet Richter’s Geiger counters didn’t react when they were exposed to a lump of radium. Curiouser and curiouser. The investigation quickly reported back to Peron that there were no nuclear reactions occurring on the island.
However, interestingly, we do have a first-hand account from one Juan G Roederer, who — as a young scientist — was tasked with cleaning up the Huemul project to see what could be salvaged from Richter’s failed experiments. He remembered his experience fifty years later, in 2003.
“He had built a powerful electric arc system in open air, extended across the gap of a huge electromagnet. He would inject lithium and hydrogen, which — surprise! — always exploded with a big bang. An array of Geiger counters nearby monitored the gamma rays from what was supposed to be a fusion reaction, and Richter would declare the counters’ response to be definite proof of success. But once he’d had to relinquish command of his project, it became evident that the counter system responded efficiently to the large electromagnetic fields present whenever the arc was on, whether or not there was lithium and hydrogen.”
Richter’s experiments were a sham. Peron was embarrassed on the world stage: at best, he’d given into delusions of grandeur, and at worst, he’d been swindled by a scientific con artist. By December 1952, the New York Times reported:
“Argentina’s atomic energy project has exploded with the force of a bursting soap bubble, it appeared today. According to engineers who had been engaged on the top secret project, all the 300 workers in Argentina’s atomic energy pilot plant on Huemel Island at San Carlos de Bariloche have been dismissed.”
Richter himself was utterly humiliated — and, after Peron’s regime fell, even briefly jailed over investigations into corruption under Peron. It seems that he lived the rest of his life out in total obscurity, and obviously no reputable lab would ever take him in again. He died in 1991.
Despite the fact that this first ever claim of nuclear fusion was merely the first in a long line of fakes, it had its influence on history. It was stories of the wild claims from Richter and Peron that helped spur more nuclear physicists to think about how a working nuclear fusion reactor might be built — how those temperatures and pressures could be obtained. It’s probably fair to say that the furore around fusion after the Argentine announcement played a part in helping Spitzer and Tuck to get funding for their nuclear fusion reactors. Physicists in the USSR and in the UK had been dabbling with small-scale fusion experiments but struggling to get funding — the attention that came from the lengthy scientific battle in the press over Richter’s experiments almost certainly helped them get that money, which launched the first serious efforts into fusion. And, if the story that Spitzer was prompted to design the Stellarator after hearing news of Richter’s claims is even remotely true, then he played his part in the history of nuclear fusion — however indirect and ignominious it may be.
And, finally, much of the equipment that Richter bought — including what was for decades South America’s only particle accelerator — found its way into reputable scientific institutions and was used for more genuine research.
Yet at the same time, the sad story of Richter and Peron’s “Baloney Bomb” acts as a cautionary tale. This was the first time that the world would be disappointed by claims of nuclear fusion that were exaggerations at best, or outright fraud at worst. It would not be the last time. Of all the promised scientific breakthroughs — perhaps because of our perception that it really would revolutionise the world — fusion seems especially prone to hype, charlatans, delusions, and disappointment. In 1951, Juan Peron promised the Argentine people that he could bottle the energy of the sun — in his case, literally dispensing the energy in pint-sized bottles to households. Nearly seventy years on, and the dream of a world powered by nuclear fusion is still decades away.
Nuclear Fusion — Kinky and Unstable
Hello, and welcome to Physical Attraction, where we’re continuing our epic series on the history of the quest for nuclear fusion.
A quick recap over what’s happened so far.
It was realised by Rutherford and his gold-foil experiment that nuclei were a thing, and scientists then discovered that nuclear reactions were responsible for transmuting elements into each other. In doing so, they were able to solve all manner of incredible problems. They now had a theory that explained radioactive decay — alpha and beta decay — as changes that took place to constituent elements of the nucleus. They also fulfilled the dream of the alchemists and people like Isaac Newton, who had spent years attempting to unlock the secret of how to turn one element into another. (Unfortunately, turning lead into gold is not especially practical.) It was realised, through developments that led to the semi-empirical mass formula, that some nuclei were more stable than others — explaining the number, distribution, and charges on the elements that had been discovered — and hence that some could release energy by splitting apart in fission, and others could release energy by joining together in fusion. Rutherford first postulated, and Bethe worked out in detail, that the stars were powered by the nuclear fusion of light elements; and then Bethe and others used that discovery of fission to detonate the first atomic bomb. It was in that Manhattan-project era that the race to make fusion happen on Earth began in earnest.
First, Edward Teller maniacally pursued explosive nuclear fusion to build an ever-more-powerful bomb; then Ulam actually came up with the successful design, which used a fission bomb primary and reflected radiation to compress a capsule of fusion fuel. Then, nuclear physicists began to think about ways that they might be able to harness this power in a non-destructive way. They quickly realised that doing so would require some way of holding the plasma at incredibly high pressures and temperatures — high enough to destroy any physical container. By 1951, many alternative designs had been tentatively proposed. Spitzer’s Stellarator would use a figure-eight-shaped tube surrounded by coils of wire, resulting in a strong axial magnetic field that could hold the plasma in place. Some scientists were looking into a “magnetic mirror” that might, through a complicated arrangement of magnetic fields, reflect the plasma back and forth inside a tube. And scientists in Britain, who later spread their ideas to America, were looking into the so-called “pinch” phenomenon. Passing large currents through a cylinder of plasma serves to compress it, like crushing an aluminium can with a strong pinch. This could both compress and confine the plasma, if exploited correctly. These ideas more or less existed on paper, and in a few early demonstration models, until Ronald Richter and Juan Peron of Argentina made some wild claims about having achieved nuclear fusion and a limitless source of clean energy. While their “discoveries” turned out to have little scientific merit, they won their place in the history of fusion by getting the idea into the public consciousness — and so helping those few fledgling fusion projects get ahold of funding.
So we resume the story with these three promising designs, and some optimism. The world had gone from not knowing about nuclei to detonating atomic bombs in a few decades. Impossible things had occurred; might it not be reasonable to expect that fusion could, too?
Of course, rivalries began almost immediately around which of the designs would be most successful. In fact, Spitzer — who favoured the figure-eight Stellarator — actually spent part of his grant money trying to prove that the pinch idea, which in the US was championed as the Perhapsatron by James Tuck, its main proponent, wouldn’t work so easily. It turned out that he was right.
The important thing to remember about a plasma is that it’s not just a lump of stuff that reacts to externally imposed magnetic fields. A plasma is made of ions and electrons — it’s made of charged particles. The way the plasma interacts with the magnetic field you use to confine it is important, sure — but the way it interacts with itself is even more important. It gives rise to an entire field of study, known as magneto-hydrodynamics, where you have to combine the equations that tell us how fluids flow with the equations that tell us how electric and magnetic fields are generated. Plasmas can carry waves due to these charged particles oscillating. The oscillations of one charged particle pull on another, and cause it to move in turn. And bear in mind that, even though we know these equations, they can still produce behaviour that’s inherently impossible to predict beyond a certain range — just as knowing the equations of fluid flow doesn’t make the weather forecastable more than a few days ahead.
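To get a feel for how fast these collective oscillations are, here is a quick back-of-envelope sketch in Python. The electron density used is an illustrative assumption for a magnetic-confinement plasma, not a figure from any particular experiment in this story:

```python
import math

# Physical constants (SI units)
E_CHARGE = 1.602e-19  # elementary charge, C
E_MASS = 9.109e-31    # electron mass, kg
EPS0 = 8.854e-12      # vacuum permittivity, F/m

def plasma_frequency(n_e):
    """Electron plasma frequency (rad/s) for electron density n_e in m^-3.

    This is the rate at which electrons, displaced from the ions, slosh
    back and forth: the simplest example of the collective oscillations
    described above.
    """
    return math.sqrt(n_e * E_CHARGE**2 / (EPS0 * E_MASS))

# An illustrative density for a magnetically confined fusion plasma;
# this number is an assumption for the sketch, not a historical figure.
n_e = 1e20  # electrons per cubic metre
f_hz = plasma_frequency(n_e) / (2 * math.pi)
print(f"Plasma frequency: {f_hz:.2e} Hz")  # roughly 9e10 Hz, i.e. ~90 GHz
```

Tens of billions of oscillations per second: the plasma reorganises itself far faster than any external knob you could hope to turn.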
What’s more, at this point in history, there were virtually no experimental results to look at. Plasma does not exist naturally on Earth under normal conditions; the closest we get is lightning, which is a kind of partially ionized plasma. There’s plenty up in the interstellar medium — although far too cool and sparse for fusion — and there’s also plenty in the heart of stars, but that’s rather hard to examine. No one had ever built a container for plasma; no one had ever heated it to near-fusion temperatures: this was totally unexplored territory, except on paper — and even then, without experiment, there was no way to confirm current plasma theory.
Magneto-hydrodynamics is as complicated as the name suggests, and can give rise to all kinds of incredibly fascinating behaviour. Sadly, not all of this behaviour is particularly conducive to getting plasma to behave the way you want it to, and getting nuclear fusion to work.
So what was the problem with the pinchy Perhapsatron? As usual, problems arose when things started getting too kinky.
The idea is that you have a perfect cylinder of plasma, and that — by passing a current through it — you get a nice radial compression that can contain and heat the plasma so that it fuses. The only problem is that this is an inherently unstable system. Like a ball sat on top of a pointy hill, it’s mathematically stable at one point only — and the slightest perturbation grows, just as the ball will rapidly roll downhill. It turns out that if you have a tiny kink in that perfect cylinder, the magnetic fields generated by the pinching current tend to expand the kink. What you essentially have is a magnetic field gradient: the field lines bunch up on the inside of the bend, so there’s more magnetic field in one area than another, and some charged particles are “pushed” more than others. They respond by moving away, and if they move in a way that only increases the magnetic field gradient, the instability will expand. Naturally, this effect gets bigger as the kink gets more pronounced, so you have a runaway, accelerating instability — the plasma tube will rapidly go from being straight to wobbly, and then finally break apart altogether. The whole process, from the initial formation of the kink to the plasma hitting the sides of your container, takes on the order of a microsecond. Even if you get fusion going, it won’t be for anywhere near long enough to generate useful power. You just have long enough to snap a few high-speed photos of the plasma wriggling and writhing like a snake before the instability destroys your nice tube of plasma. This was what was found when the Perhapsatron was first constructed by Tuck — with help from others, like our old friend Ulam — in 1953.
And in many ways this sets the story up for so much of fusion endeavour to come. Someone will come up with a design that solves a problem; it seems like it should work. But then, when theoretical calculations are done in detail — or when the device is actually built — some instability that hadn’t been considered comes to light, and renders the idea impractical. Then you have to either try to compensate for that instability with a more complicated design, or abandon the whole idea and think of some other way of getting towards fusion.
A great deal of the history of nuclear fusion efforts can be described as very clever theorists and experimentalists having their efforts foiled by the intricate complexities of how plasma can behave in electromagnetic fields.
Just a year later, in 1954, our old friend Dr Strangelove — I mean, sorry, Edward Teller — pointed out another kind of instability, one that would plague early types of stellarators. In fact, this instability was discovered by Martin Schwarzschild — the son of Karl Schwarzschild, the man who had solved Einstein’s equations of general relativity and whose solution first described black holes mathematically — and Martin Kruskal, who made immense contributions to many fields in mathematics and physics. As scientists and theorists began to look in more detail into how plasmas might behave under a range of different conditions — to solve this practical problem of getting fusion to work — they were beginning to discover the immense complexity of the field, and the ways things could go wrong when you try to get plasma to behave. This particular instability is sometimes called the interchange instability. Recall that the stellarator uses magnetic fields generated by coils of wire wrapped around a figure-eight tube. Inside the tube, you have magnetic fields due to the coils of wire, but also due to currents running through the plasma itself.
It turns out that sometimes, when a magnetic field has a particular curvature, it’s preferable for the plasma to swap places with the empty space containing the magnetic field. This actually doesn’t result in a change in shape for the magnetic field as a whole — which means that you don’t change the energy in the system associated with the magnetic field. But instead, you change other forms of energy. For example, gravitational potential energy. If the magnetic field doesn’t care about the plasma drifting down due to gravity, then it’s energetically favourable for the plasma to do just that. But this then means that the plasma isn’t uniform anymore, and so you have a perturbation that grows, and eventually your plasma becomes unstable and can’t be contained by the magnetic field.
Teller’s analogy was that the magnetic field holding the plasma was a little bit like rubber bands trying to hold jelly together: as they squeeze the jelly, “they try to snap inwards and let the plasma leak between them.”
It actually turns out that the interchange instability hadn’t even been noticed yet by the scientists working on early stellarators — they had other, far bigger instabilities to deal with. This, of course, is the issue: if you solve the instability that destroys your plasma in one microsecond, it might just allow you to watch some new instability that takes ten microseconds to tear the plasma apart. At this early stage of fusion development, “plasma diagnostics” — the ability to actually measure the properties of confined plasmas — were still extremely rudimentary. Early stellarators were suffering from slightly more mundane problems. The plasma couldn’t be heated to high enough temperatures. The magnets tended to move around when they were operating at their maximum capacity, destroying any hope of stability. Impurities in the plasma resulted in it emitting vast amounts of X-ray radiation, which caused it to rapidly cool down before it could hope to reach fusion temperatures.
The solution, in each case and for every kind of researcher, was to build a slightly larger device — or one with some additional complexities to the magnetic field, designed to smooth out the instabilities that had been discovered. Between 1953 and 1957, a succession of stellarators was built: the Model A, the Model B, the B-1, the B-2, the B-64, the B-65, and the B-66. Each one was larger, and usually more complex, than the one before.
Each one demonstrated some incremental improvement or other, in the confinement time, or the heating of the plasma, or its behaviour. There are all kinds of different problems and parameters to tweak with the stellarator that complicated the simple design that Spitzer had come up with on his ski-lift. You’ll remember that this design was a figure eight because of the problem of drift in a normal doughnut-shaped fusion reactor. The wires that wrap around the donut are bunched up on the inside compared to the outside, and so the field is stronger towards the centre of the donut, which causes charged particles to drift towards the edges. The ions and the electrons would drift apart, and strong field gradients would tear the plasma apart. The figure eight is supposed to cancel this drift, because the particles are travelling clockwise and anticlockwise for different sections of the track. But it’s impossible to have the figure eight precisely crossing itself — otherwise you’ll just end up with two columns of plasma smashing into each other — so there was some residual drift. Designs experimented with trying to get the configuration closer to a figure eight — for example, flattening out the tube into a “race-track” configuration.
Unfortunately, the magnetic field is not the only source of drift for the particles to crash into the walls. You also need to worry about diffusion — the natural tendency for the particles to spread outwards through collisions and interactions with each other. Previous calculations had suggested that this would be a tiny effect compared to the natural drift due to the magnetic field gradient. But theorists — notably Bohm — pointed out that diffusion in plasmas is more complicated. Classically, we think of diffusion in terms of particles bumping into each other, colliding, and interacting. Imagine having a gas of atoms that’s doing just that; over time, the gas will gradually spread out. Collisions in dense regions of gas will be more frequent than in sparse regions, which allows the atoms towards the edges to wander outwards more freely than those at the centre. The result is something like a “random walk” — if you take a step in a random direction, then turn around and take another step in a random direction, and so on — eventually, you drift away from where you started.
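The “random walk” picture can be made concrete in a few lines of Python: the typical distance from the start grows only as the square root of the number of steps, which is why classical, collision-driven diffusion was expected to be slow. The step size and walker counts here are arbitrary choices for illustration:

```python
import math
import random

def random_walk_rms(n_walkers, n_steps, step=1.0):
    """Root-mean-square distance from the start after n_steps random
    unit steps in two dimensions, averaged over n_walkers walkers."""
    total_sq = 0.0
    for _ in range(n_walkers):
        x = y = 0.0
        for _ in range(n_steps):
            theta = random.uniform(0.0, 2.0 * math.pi)  # random direction
            x += step * math.cos(theta)
            y += step * math.sin(theta)
        total_sq += x * x + y * y
    return math.sqrt(total_sq / n_walkers)

random.seed(1)
for n in (100, 400, 1600):
    # The RMS displacement comes out close to sqrt(n) * step:
    # roughly 10, 20, and 40 respectively.
    print(n, round(random_walk_rms(1000, n), 1))
```

Quadrupling the number of steps only doubles the typical distance travelled. Bohm’s worry was precisely that plasmas, with their long-range electric and magnetic interactions, might not obey this gentle classical scaling at all.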
But plasmas are different. Their particles don’t just move according to their velocity distribution, like free atoms in a gas, but under the influence of magnetic and electrical fields. The result is that diffusion in plasmas wasn’t well understood — and, if it turned out to increase the more you tried to confine the plasma, for example, then it could be a dealbreaker.
There are still other problems. We’ve talked about how the Lorentz force on a charged particle — the force it feels due to magnetic fields — is q(v × B), which means it’s perpendicular to both its velocity and the magnetic field, and proportional to both. But this means that if you have plasma ions and electrons with substantially different velocities, they’ll feel different forces. They’ll also be travelling at different rates, and so they’ll be accelerated at different rates around the figure-eight. Particles that are travelling too quickly might smash into the walls of the stellarator. Those that are travelling too slowly might not be confined at all, with the weak magnetic forces that act on them, and drift on a lazy, large orbit into the walls of the stellarator. The result of this is that you enhance instabilities, and you lose plasma — it’s not good for confinement, and it’s not good for fusion. Spitzer and others tried to fix this using a “diverter” — essentially, a region that would select the particles with the appropriate velocities and send the rest away. But if you’re losing too much plasma, you’re not going to get ideal fusion conditions — and if you’re losing the fastest ions and electrons, you’ll lose some of the energy that’s generated by any fusion.
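To put a number on how velocity affects confinement, here is a rough sketch in Python of the gyroradius — the radius of the circle a charged particle traces around a field line, r = mv/(qB), which follows from balancing the Lorentz force against circular motion. The field strength and temperatures below are illustrative assumptions, not figures from the early stellarators:

```python
import math

E_CHARGE = 1.602e-19       # elementary charge, C
DEUTERON_MASS = 3.344e-27  # deuteron mass, kg
K_BOLTZMANN = 1.381e-23    # Boltzmann constant, J/K

def gyroradius(mass, v_perp, charge, b_field):
    """Larmor radius r = m * v_perp / (q * B): the radius of the circle
    a charged particle traces around a magnetic field line."""
    return mass * v_perp / (charge * b_field)

def thermal_speed(mass, temp_k):
    """A representative thermal speed, sqrt(2 k T / m)."""
    return math.sqrt(2.0 * K_BOLTZMANN * temp_k / mass)

B_FIELD = 1.0  # tesla; an assumed field strength for this sketch
for temp in (1e5, 1e7):  # 100,000 K versus 10 million K
    v = thermal_speed(DEUTERON_MASS, temp)
    r = gyroradius(DEUTERON_MASS, v, E_CHARGE, B_FIELD)
    print(f"T = {temp:.0e} K: v ~ {v:.2e} m/s, gyroradius ~ {1000 * r:.2f} mm")
```

A deuteron ten times faster traces a circle ten times wider — so the fastest particles in the tail of the velocity distribution orbit on much larger circles, and are the first to find the walls.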
So the early 1950s in general turned out to be an era when early experiments and theoretical calculations demonstrated that this magnetically-confined fusion business was going to be far more complicated than anyone thought. The complex interactions between magnetic and electric fields, plasma charged particles colliding with each other and drifting due to their insanely high temperatures and pressures, and the fluid dynamics and turbulence of moving plasma — all of this gives rise to a rich array of incredibly complex, beautiful, and fascinating behaviour — but also painful instabilities that ruined early fusion reactors. The $50,000 budgets for initial research projects were ballooning into tens of millions of dollars. The ambition grew ever wilder — but results would have to wait for the next generation of machines.
Ultimately, though, these early devices struggled. Not only was confining the plasma an incredible challenge all by itself, but attaining the necessary heat and pressure for you to even begin dreaming of fusion was not easy. For example, the B-1 machine was only heating plasma to around 100,000 K — compared to the 50 million K that Fermi had calculated would be necessary for a self-sustaining fusion reaction in deuterium-tritium fuel. Confining the plasma for a few milliseconds was hard enough, but it wasn’t even at fusion temperatures yet.
The stellarator — at least in the US — was thought to be the most promising concept after early calculations and experiments had shown the kink instability in pinch devices. The UK was still focusing on pinch devices, of which more later. Yet, in 1955, scientists working on the pinch project at Los Alamos in the US had something to show for their efforts.
A key point to understand here is that nuclear fusion can occur between plenty of different light nuclei and types of fuels. We know that all kinds of fusion processes can take place in heavy stars, but there, the energy requirement is not as limiting as it is in our experimental fusion reactors. The nuclear fusion that everyone is trying to achieve at this stage is deuterium-tritium fusion. Deuterium is a proton and a neutron, and tritium is a proton with two neutrons, so they’re both heavy forms of hydrogen. You might intuitively expect that this will be a fairly good target reaction for fusion — because, after all, what makes fusion so difficult? The answer is that protons repel each other. So getting some light nuclei with a nice high number of neutrons compared to protons should mean that you have a decent shot at getting fusion at fairly low temperatures; the strong force will “take over” at a slightly further distance, and fusion is slightly more likely. And, indeed, this is the case: deuterium-tritium fusion reactions are still favoured by experimental reactors today, with a large “cross-section” (likelihood to interact) and a lower required temperature than other reactions.
The reaction between deuterium and tritium is fairly simple. A proton and a neutron meet a proton and two neutrons. The result is helium-4, a nice stable isotope of helium that we all know and love — and a single, leftover neutron. The energy that’s released due to the rearrangement of nuclei in one of these reactions — around 17.6 MeV (mega-electronvolts) — mostly ends up as kinetic energy in that neutron. So, if you’re really getting nuclear fusion to work, the sign will be lots of very fast neutrons — which aren’t deflected at all by the magnetic fields in the bottle, so tend to just whizz out of the reactor — emerging from the plasma and smashing into your detectors. Neutrons, as a sign that fusion reactions are taking place, are very important beasts. This is why, in early 1955, when scientists at Los Alamos built yet another large pinch machine — and, switching it on, they found neutrons emerging — it caused a great deal of excitement. It looked like they had made that crucial first step — getting nuclear fusion reactions going on Earth in steady conditions. It seemed like a quiet breakthrough. The machine — the Columbus I, named after another famed explorer — may have just stumbled upon a way of getting fusion to work on Earth. And, a few years later, the results were repeated in Britain by a much smaller experiment.
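You can check that 17.6 MeV figure yourself from the published atomic masses: the energy released is just the mass that disappears in the reaction, via E = mc². A short sketch in Python:

```python
# Published atomic masses in unified atomic mass units (u);
# 1 u of mass difference corresponds to 931.494 MeV of energy.
U_TO_MEV = 931.494
M_DEUTERIUM = 2.014102  # D (hydrogen-2)
M_TRITIUM = 3.016049    # T (hydrogen-3)
M_HELIUM4 = 4.002602    # He-4
M_NEUTRON = 1.008665    # free neutron

# Energy released = mass of reactants minus mass of products, times c^2.
q_value = (M_DEUTERIUM + M_TRITIUM - M_HELIUM4 - M_NEUTRON) * U_TO_MEV
print(f"Energy released per reaction: {q_value:.1f} MeV")  # ~17.6 MeV

# Momentum conservation splits the energy inversely with mass, so the
# light neutron carries the m_He / (m_He + m_n) share of it.
e_neutron = q_value * M_HELIUM4 / (M_HELIUM4 + M_NEUTRON)
print(f"Neutron kinetic energy: {e_neutron:.1f} MeV")  # ~14.1 MeV
```

Momentum conservation hands the light neutron the lion’s share — about 14.1 MeV — which is exactly why fast neutrons streaming out of the machine are the tell-tale signature of D-T fusion.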
And we’ll pick up the story there next episode.
Nuclear Fusion — A Sun of Our Own!
Reporting of science in the newspapers is very tricky to get right, for many reasons. One is that we have a narrative of science as taking place in dramatic, revolutionary breakthroughs — and this is what sells papers. The incremental, limited way in which real science usually advances — small progress on previous results here, an interesting experimental anomaly which needs to be peer-reviewed and potentially corrected there — is not how we like to think about science, and it’s certainly not what sells newspapers. At the same time, scientists — who have to work on very specific problems — want to explain why their research is relevant and important. So you’ll see a hundred press releases a year in the scientific literature announcing a new breakthrough in renewable energy, or carbon dioxide removal, or quantum computing, or some such sexy topic. And then there is always the temptation to sensationalise this discovery into the big one, the one that will change the world — and mention that the results are preliminary in a footnote, if at all. Occasionally, a dishonest journalist will take a single result from a paper completely out of context. A colleague of mine probably remembers with irritation the paper that estimated the climate sensitivity to doubling carbon dioxide: the headline screamed “11 degrees C”, when this was the very upper limit — a value that couldn’t technically be ruled out, but was considered highly unlikely.
I imagine for the scientists involved there are very mixed feelings. On the one hand, it must be nice to see enthusiasm and excitement about your work. On the other hand, there’s suddenly an awful lot of pressure on you to be correct — and to live up to the hype that wasn’t entirely whipped up by you. I imagine these are the mixed feelings that the scientists at Britain’s Harwell laboratory, working on the ZETA experiment, felt if they happened to glimpse a copy of the tabloid newspaper, “The Daily Sketch”, on January 25, 1958.
“A SUN OF OUR OWN — and it’s made in Britain!” trumpeted the newspaper on its front page; OUR SCIENTISTS SPUTNIK THE RUSSIANS — FOR PEACE!
And the newsreels were enthusiastic too:
[CLIP FROM FIRST 1MIN OF HARWELL ZETA]
The press attention surrounding the ZETA experiment was, in fact, so intense that even the wife of one of the chief scientists was photographed for a pull-out spread in a newspaper — and given the title “Mrs ZETA.” And when you know that millions of people around the nation are reading over their morning toast: “The mighty Zeta provides limitless fuel for millions of years”, you’re probably going to feel a little bit like this whole thing is spiralling slightly out of control.
When Juan Peron and Ronald Richter had caused the original fusion flurry, it had been quashed pretty quickly. No one had heard of these scientists; very few people thought that Argentina, of all places, had been the first to crack fusion. It wasn’t even widely agreed that fusion would be possible on Earth. But ZETA, and Harwell, were different. They were reputable. In fact, as you’ll remember from previous episodes, the whole idea of pinch nuclear fusion had come from British scientists at Oxford — it was from them that James Tuck got the idea, before he went over to the US, got funding for his Perhapsatron, and helped build the pinch research programme that produced those all-important neutrons at Los Alamos. Britain was the first country in the world to have an atomic weapons project, although it was very quickly dwarfed and outpaced by the Manhattan Project. After 1945 and the start of the Cold War, it became clear that concerns about spies leaking classified nuclear secrets meant the US would not share its nuclear technology with Britain — and this put Britain at a distinct disadvantage. After one particularly demeaning phone call, the Foreign Secretary blustered into a meeting about developing the nuclear bomb and said: “We’ve got to have it, and it’s got to have the bloody Union Jack on it.” A few years later, sure enough, Britain’s nuclear scientists had more-or-less independently developed nuclear weapons.
On the day that headline was published, scientists and journalists were crammed into the aircraft hangar at the nuclear research laboratory. For several months, rumours and early experimental results had been leaking out of Harwell to the press, suggesting that they were on the verge of announcing a big breakthrough in fusion. At that press conference, the director of research, Sir John Cockcroft, described the process of nuclear fusion, and said: “Using ZETA, we have achieved temperatures of 5 million degrees for a few thousandths of a second.” And the chief scientist, Sir George Thomson, commented — yes, say it with me now — “Within twenty years, it should be possible to have viable nuclear fusion reactors.”
Where did this come from? As we discussed, the British fusion efforts were focused on the “pinch” concept. Take plasma in a torus, and then jam a huge current through it. That current will cause a magnetic force that pinches the plasma inwards, compressing it — hopefully — to fusion temperatures and densities. In the early 1950s, several prototype machines were built both in the US and the UK. But they quickly ran into instabilities — the dreaded “kink” and “sausage” instabilities, where tiny defects on the surface of the plasma would quickly spiral out of control, with the plasma bending, twisting, writhing like a snake and smashing into the walls of the torus. The issue was density gradients. When the pinch current was applied, any area of the gas that had a slightly higher density would create a slightly stronger magnetic field and collapse faster than the surrounding gas. This caused the localised area to have higher density, which created an even stronger pinch, and a runaway reaction would follow. The quick collapse in a single area would cause the whole column to break up.
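For a sense of the currents a pinch needs, here's a hedged back-of-envelope using one common form of the Bennett relation for an equilibrium z-pinch, which balances the inward magnetic pinch force against the plasma pressure per unit length. The numbers are illustrative, not taken from any machine in this story.

```python
import math

# Sketch: one common form of the Bennett pinch relation,
#     N * k_B * (T_e + T_i) = mu0 * I^2 / (8 * pi)
# where N is the line density (ion-electron pairs per metre of column),
# T_e and T_i the electron and ion temperatures, and I the pinch current.
mu0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
k_B = 1.380649e-23         # Boltzmann constant, J/K

def bennett_current(N_per_m, T_e, T_i):
    """Current (amps) needed to confine an equilibrium z-pinch."""
    return math.sqrt(8 * math.pi * N_per_m * k_B * (T_e + T_i) / mu0)

# An illustrative plasma column: 1e19 pairs per metre at 100 million kelvin.
I = bennett_current(1e19, 1e8, 1e8)
print(f"required current ~ {I / 1e3:.0f} kA")
```

Currents of hundreds of kiloamps to megaamps are exactly the regime the pinch machines were pushing into — which is why ZETA's induction magnet was built to drive 100,000 amps through the plasma.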
The physics teams came up with various solutions to these instabilities. One idea was to increase the rate of compression — perhaps by passing a much larger current through the plasma, much more quickly. This way, the compression would happen so fast that the plasma wouldn’t have time to respond — the density gradients wouldn’t come into play, and the whole plasma would be compressed by a sudden shock wave. This was sometimes called the “fast pinch” idea. Other concepts were more ingenious — for example, wrapping the entire inside of the vacuum tube in a thin metal sheet.
In electromagnetism, there’s a famous phenomenon called Lenz’s Law. It relates to electromagnetic induction. When a changing magnetic field is applied to a conductor, it induces a current. This is the principle behind basically every power station: you change the magnetic flux through a coil of wire by spinning it in a magnetic field, or by spinning the magnet, and that changing flux causes a current to flow.
But the current itself has its own magnetic field — every electrical current generates a magnetic field around it. Lenz’s law tells us that this magnetic field has to oppose the changing one that’s inducing the current. I think the easiest way to understand this is via conservation of energy. Imagine that you could move a magnet towards a coil of wire. The magnetic field in the wire changes as you do this, which causes a current to flow through the wire. If that current then produces a magnetic field that drags your magnet towards the wire, you’re in trouble with conservation of energy! You could imagine moving a magnet a little way towards a wire, which would then generate a magnetic field that dragged the magnet in — it would accelerate with no apparent source of energy. This would be amazing for power plants — they’d just have to set the magnet spinning and the current would cause it to accelerate and spin forever, without having to burn any nasty fossil fuels or design any complicated fusion reactors. Instead, to produce the energy in the current that’s induced in the wire, you need to act against a force. And that force is the tendency of the induced magnetic field to repel the magnet.
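For the mathematically inclined, all of this is packed into Faraday's law of induction, with Lenz's law living in the minus sign: the induced EMF opposes the change in magnetic flux that produces it.

```latex
\mathcal{E} = -\frac{\mathrm{d}\Phi_B}{\mathrm{d}t}
```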
Physicists working on the fusion reactor thought they might be able to exploit Lenz’s law. Imagine the plasma as our magnet. As it moves towards the thin metal sheet, it induces a current in that metal sheet — which opposes the motion of the plasma. So you can perhaps hope to stabilise a plasma this way — using Lenz’s law in a conductor to push the plasma back. Sadly, this approach seemed to work better for cancelling out large-scale movements — say, the entire plasma column drifting — and did little to calm these instabilities where a small region of plasma bursts out and destabilises the column.
There was also a “stabilised pinch” concept, where additional magnets wrapped around the torus would hopefully act to damp and cancel out any instabilities that occurred when the pinch was applied. (See the BBC documentary: https://www.bbc.co.uk/programmes/b008nzws)
ZETA was an ambitious attempt to go one step further. ZETA stands for Zero Energy Thermonuclear Assembly; the “Zero Energy” refers to breakeven — the hope was that the nuclear fusion reactions would produce as much energy as was put into the machine, just a small jump away from a power plant. It was the largest machine that the British had yet constructed — although it cost a relatively small amount, around $1 million US. The Model C stellarator in the US cost twenty times that. And it incorporated both types of stabilisation — the metal padding on the inside of the torus that was going to exploit Lenz’s law to push the plasma back into place, and the additional magnets wrapped around the torus that would stabilise the pinch — along with an induction magnet that would allow them to jam 100,000 amps through the plasma when they switched it on.
The initial results were very exciting — which was probably why rumblings got out to the press. The initial stability problems with the plasmas were much improved; the plasma lasted for milliseconds rather than microseconds. That might not sound impressive, but improving things by a factor of a thousand is impressive. I wish that would happen to this show’s listener numbers, or maybe my bank balance.
But most exciting of all were the neutrons. Every time the machine was switched on to blast a pulse of plasma around, there was a burst of around a million neutrons. Naturally, this looked exactly like nuclear fusion. At the same time, the physicists knew that there were perhaps other ways that neutrons could be produced in the intense conditions inside the reactor, that might not be proof positive of thermonuclear fusion. Measuring the temperature of the plasma would prove critical. If the temperature was high enough for fusion, and a burst of neutrons was also being emitted, then fusion seemed to be the most likely culprit. If, however, the plasma wasn’t being heated sufficiently, then the neutrons couldn’t possibly come from fusion.
Directly measuring the temperature of the plasma proved to be extremely difficult. Although ZETA actually had windows that allowed you to look at the plasma from the outside (!), you couldn’t very well stick a thermometer that goes up to five million kelvin in it. Instead, they had to measure the temperature indirectly, using the light emitted by the plasma. The idea here is that the Doppler effect — the stretching and squishing of the wavelength of light as the sources that emit it move around, in the same way as sound waves are stretched and squished when an ambulance goes NYEEEEEEOWMMMMMMMMM past you — would allow them to infer the temperature of the plasma. This technique suggested temperatures of 1–5 million kelvin in the plasma — hotter than anything that had been achieved before.
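As a rough sketch of the principle (with illustrative numbers, not the actual ZETA analysis): the thermal Doppler spread of a spectral line grows with the square root of temperature, so measuring how smeared-out a line is tells you how hot the emitting ions are.

```python
import math

# Sketch: thermal Doppler broadening as a plasma thermometer.
# The 1-sigma fractional wavelength spread of a line emitted by ions of
# mass m at temperature T is sqrt(k_B * T / (m * c^2)).
k_B = 1.380649e-23   # Boltzmann constant, J/K
c = 2.998e8          # speed of light, m/s
m_D = 3.344e-27      # deuteron mass, kg

def doppler_fraction(T_kelvin, m_kg):
    """1-sigma fractional Doppler width, delta-lambda over lambda."""
    return math.sqrt(k_B * T_kelvin / (m_kg * c**2))

# Fractional line widths at the temperatures ZETA claimed:
for T in (1e6, 5e6):
    print(f"T = {T:.0e} K -> dl/l = {doppler_fraction(T, m_D):.2e}")
```

The spread is tiny, a few parts in ten thousand even at five million kelvin, which gives you a feel for how delicate the measurement was.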
Thus the leaks to the press and the excitement arose. In the months leading up to that press conference and the famous headlines at the start, the British press published an average of two articles a week about the ZETA experiment — and the Americans got involved, sending scientists to inspect the ZETA machine. Here, the US-UK rivalry grew bitter, with some pundits suggesting that the US were delaying the release of results from the ZETA experiment because they couldn’t yet reproduce them and didn’t want to admit that the British were scientifically ahead of them. The US certainly acted to delay the publication of results from Harwell, and there was a great deal of bitterness when they were eventually published in Nature in 1958 alongside results from American experiments.
Lyman Spitzer — remember him, he came up with the stellarator idea on a ski-lift while taking a holiday from making nuclear weapons — was one of the American fusion scientists who inspected ZETA. While initially he thought they had cracked fusion, he soon had doubts. According to his calculations, the pulses of current simply weren’t firing for long enough to truly heat the plasma to five million degrees — and if the plasma wasn’t at fusion temperatures, the neutrons couldn’t be coming from fusion.
Nevertheless, at that famous press conference at Harwell when the results were announced, the director was quizzed about the neutrons. How sure was he that they had achieved nuclear fusion? The Harwell scientists had more or less agreed to say “These are preliminary results; further testing is needed before we can be sure” — but Cockcroft went on the record as saying he was “90% certain” that thermonuclear fusion was happening inside ZETA — and this, as much as anything else, fuelled the press hype around the experiment. Cockcroft, after all, had won the Nobel prize for nuclear physics research just seven years earlier. He had worked with Rutherford, the man who had discovered the nucleus. His Nobel-winning experiment was the first to artificially split an atom. He was no Ronald Richter; as one of the most outstanding and respected British scientists, he gave everyone reason to believe that ZETA had cracked fusion.
Soon enough, there was talk of building a ZETA II — which would aim to reach 100 million kelvin, and generate net power. Papers talked about “Unlimited power from sea water”; no more smog, no more coal, no more oil. As part of the hype surrounding the device, universities around the world — including in Osaka in Japan, and in the United States — began announcing their own versions of ZETA. Some, like the American devices Columbus II and the Perhapsatron III, were generating their own neutrons. Even the government of the Soviet Union congratulated the British on their techniques and expressed their “admiration” — with a slight attempt to steal a little glory by pointing out that Sakharov had come up with similar ideas.
But not everyone was convinced. Spitzer was objecting on the grounds that the temperatures reached couldn’t possibly be high enough. And Basil Rose, who worked at Harwell, was not convinced that the neutrons were really from fusion. Fortunately, he was also the guy who was basically in charge of experiments with subatomic particles at Harwell. He was in charge of their synchrotron — a 1950s particle accelerator like a smaller version of the LHC today.
Back then, subatomic particle detection was a little less accurate but a lot more fun. They used a cloud chamber, which takes advantage of a supersaturated vapour. When particles pass through a cloud chamber, they ionise the atoms they bash into (neutrons, being uncharged, do this indirectly, via the charged particles they knock loose) — and those ions attract the surrounding vapour, causing a little cloudy trail to condense along the path of the particle. By studying the tracks, like little contrails behind aeroplanes, you can figure out what your subatomic particle is, how quickly it’s moving, and so on. I just found out that you can actually build these in your living room, and now I long for the day when I actually have a living room so that I can make one. Maybe that would be a good bonus episode.
Basil Rose suspected that the neutrons weren’t being produced by thermonuclear fusion — and clearly measuring their properties would help to test this hypothesis. And this was where ZETA’s triumph began to unravel.
First, Basil noticed that the neutrons were highly directional. Yet in a hot, colliding plasma, you’d expect neutrons produced by thermonuclear fusion to spread out in all directions. But the real killer was an ingenious experiment: run the current backwards through ZETA. If what you’re really doing is compressing the plasma and producing neutrons through nuclear fusion reactions, the apparatus should be insensitive to the direction the current runs through the tube. But reversing it totally changed the number of neutrons and their energies. Neutrons emitted in the direction of the current had far more energy than those emitted in the opposite direction. Fusion reactions couldn’t explain that. And similar experiments at the Perhapsatron and the Columbus devices showed that their neutrons also changed when you reversed the current.
It turns out that the neutrons were the result of another dreaded plasma instability. We talked about how, when little pinches, twists, or kinks appear in the plasma, those density gradients can very quickly grow very big. The electrical and magnetic fields grow and grow in the region of the instability, and this can result in very small regions with extremely high electrical fields. Those electrical fields in turn accelerate nuclei in the direction of the pinch current, and they smash into the rest of the (much colder) plasma, or the walls of the vessel. This can knock neutrons out of the nuclei, or out of the vessel walls, via a process called neutron spallation. You can even get lucky and see a small number of fusion reactions, where a hot nucleus successfully smashes into a colder one and fuses.
But this is not at all how the machine was supposed to work. The incredibly hot nuclei that actually reached fusion temperatures were the product of an instability — and so you couldn’t hope to sustain their production; they were a brief burst as the instability killed the plasma dead. The ZETA machine was supposed to uniformly heat the plasma so that hot nuclei would fuse with each other. It was not supposed to create small numbers of ultra-hot nuclei that would bash into things and produce neutrons. These reactions were not going to produce energy, and ZETA was inherently unstable. Rather than a sustained burn, the plasma would flicker and snuff itself out.
The result was deeply embarrassing for those who had worked on ZETA. Cockcroft himself tried to save face in publishing a retraction that said “It is doing exactly the job that we expected it would and functioning exactly the way we hoped.” But the damage had been done. ZETA did represent a step forwards. You can talk to scientists involved in it now — as the BBC did for their excellent documentary, Britain’s Sputnik — and you’ll hear them, perhaps a little defensively, point out that it allowed us to advance our understanding of plasma physics at high temperatures.
The real irony was that a few months later, another, similar device — called Scylla — managed to achieve thermonuclear fusion using the pinch effect. This time, they really got up to temperatures of many millions of Kelvin, and they genuinely did produce neutrons from thermonuclear fusion. But the scientists were very cautious about announcing their results. There was no fanfare. And, by the time in 1960 that they finally said they were willing to stake their reputations on having achieved thermonuclear reactions, no one seemed to care all that much. The world’s first controlled thermonuclear fusion experiment took place with the Scylla pinch device in 1958, at Los Alamos, following on from work by ZETA and the other pinch fusion pioneers. But these days, very few people — myself included, until researching this episode — have heard of Scylla.
There were, of course, geopolitical reasons that the ZETA hype got out of control. For a start, you have to remember where Britain was in 1958. From a vast Empire that ran a quarter of the world, they had been reduced by the First and Second World Wars to a second-rate power. The Suez crisis in 1956 had been a national embarrassment that had demonstrated that the British were subordinate to the Americans. It was clear that the Empire was in decline, and the global power dynamics were going to be about a competition between the US and its allies, and the Soviet Bloc. But Britain could still be a major player on the world stage scientifically. “Sputnik” was in that headline for a reason: just the year before, the Soviets had succeeded in launching the Sputnik satellite into Earth’s orbit.
So the “breakthroughs” at ZETA gave the British politicians and press something to trumpet at a time when they felt it was needed.
Concern over the leaking of nuclear secrets meant that the international fusion efforts were not collaborating with each other. In fact, Klaus Fuchs — who loyal listeners will remember we discussed as the Soviet spy who leaked nuclear secrets to the USSR, way back when I interviewed Simon Ings about Stalin and the Scientists — was head of theoretical physics at Harwell, where he was particularly sceptical of nuclear fusion as a power source. When it was discovered in 1950 that he had passed secrets to the USSR, fusion research became highly classified. This meant that no one’s fusion research programme was transparent; it was impossible to tell if someone else had made the breakthrough first. Nowadays, most fusion research is done fairly out in the open, and arguably the furthest-along projects are huge collaborative efforts. But the secrecy spurred competition, as Joan Lisa Bromberg describes: “Secrecy prevented [the scientists of Project Sherwood] from forming a realistic picture of the state of the art for fusion science and technology on a global scale: for all they knew, the Soviets were already mass-producing desk-sized reactors!” Yet it also set the fusion project back by years if not decades.
Tuck and the pinch team at Los Alamos ended up finding more or less the exact same problem that ZETA had. If they had communicated with each other, perhaps ZETA would have done some more tests before holding a press conference, and the result would have been far less disillusionment about the nature of nuclear fusion.
Those are all counterfactuals for another day, though. After the embarrassment and apparent failure of ZETA, all that hype came crashing down. Fusion scientists realised that a sustained reaction was not simply another few years and a slightly bigger machine away — that this was going to be a very complicated, stop-start endeavour. Progress was being made. Confinement times were being increased. But every step forward seemed to run into new false hope, a new instability, another case of inflated expectations followed by disappointment. The dream of unlimited power from fusion was slowly slipping away. Funding was starting to dry up — there’s only so many times you can hear that the next big machine, or the next one, will be the one that successfully delivers on the inflated expectations. And, in many ways, the whole field of physics — particularly nuclear physics — was falling from this peak of inflated expectations. In 1945, there was every reason to believe that physicists were on the verge of fully understanding the world, taming the forces of nature, and changing human society into a kind of sci-fi utopia with limitless energy to perform all of our desires. But as the decades wore on, the promises seemed more and more unlikely to be fulfilled. Just as the shadow of the nuclear bomb hung over the world after the Trinity test, so the shadow of experiments like ZETA — perhaps unfairly considered failures — would hang over the nuclear fusion community. This was not going to be easy.
And yet, even as the fusion scientists were licking their wounds, there were two new developments taking place that would give rise to totally different kinds of devices. Devices that would revolutionise the field of fusion. One would create a new sort of fusion device that could achieve vastly improved performance, and is still the leading candidate to this day for nuclear fusion. And the other would create an entirely new way of attempting to achieve fusion.
It’s there that we’ll pick the story up next episode.
The archive radio footage you heard was from “Britain’s Sputnik”, a 2008 radio show about the Harwell Zeta experiment.
p92 of Sun In A Bottle
================== Next episode: the Tokamak revolution! =========================
Tokamaks attained the temperatures that earlier fusion reactors couldn’t. Note the difficulties in attaining high temperatures experienced by e.g. Ohmic heating in the stellarator:
Early stellarator designs used a system similar to those in the pinch devices to provide the initial heating to bring the gas to plasma temperatures. This consisted of a single set of windings from a transformer, with the plasma itself forming the secondary set. When energized with a pulse of current, the particles in the region are rapidly energized and begin to move. This brings additional gas into the region, quickly ionizing the entire mass of gas. This concept was referred to as ohmic heating because it relied on the resistance of the gas to create heat, in a fashion not unlike a conventional resistance heater. As the temperature of the gas increases, the conductivity of the plasma improves. This makes the ohmic heating process less and less effective, and this system is limited to temperatures of about 1 million kelvins.
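The self-limiting nature of ohmic heating comes down to a scaling law: plasma (Spitzer) resistivity falls roughly as T^(-3/2), so the heating power deposited by a fixed current density shrinks as the plasma gets hotter. A tiny illustrative sketch, with a made-up reference temperature:

```python
# Sketch: why ohmic heating self-limits.  Spitzer resistivity scales
# roughly as T^(-3/2), so at fixed current density J the heating power
# P = eta * J^2 falls off as the plasma heats up.  T_ref is an arbitrary
# illustrative reference point; only the scaling matters here.
def relative_heating_power(T, T_ref=1e4):
    """Ohmic heating power relative to that at T_ref, at fixed J."""
    return (T / T_ref) ** -1.5

for T in (1e4, 1e5, 1e6, 1e7):
    print(f"T = {T:.0e} K -> relative heating power {relative_heating_power(T):.1e}")
```

By a million kelvin the heating is a thousand times weaker than at ten thousand kelvin, which is why ohmic heating alone stalls around the million-kelvin mark.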
Talk about Bohm diffusion in Stellarators suggesting that MCF would be totally impractical
then the discovery of what the Russians were doing with tokamaks, superior confinement, etc.
============== then — INERTIAL CONFINEMENT FUSION =====================
and so on…
Nuclear Fusion: Doldrums and Tokamaks
Hello and welcome back to Physical Attraction’s series of episodes discussing the history, physics, and future of nuclear fusion.
So far, we’ve discussed the history of fusion from the discovery of nuclei right up until the end of the 1950s. For fusion researchers in the West, this was a time of some disillusionment. The incredible progress in our understanding of nuclear physics — and our ability to harness the destructive power of these newly-discovered fundamental forces — had led to hope that it might be possible to develop a workable fusion reactor.
Writing this show, and reflecting on the world, I’m always keenly aware of how unique this era of human history is. For tens of thousands of years, the lives of proto-humans didn’t change all that much: you could be in 30,000BC or 20,000BC and notice only minor differences between those eras. In just the last fifty years, the world’s population has more than doubled — and there have been incredible technological changes.
James Gleick, in the book Time Travel, points out that this rate of technological change has given rise to a broad and popular conception of “the future” that arose alongside the early science fiction of people like HG Wells. Not just a future that was limited to next year’s harvest, or personally growing old: not just a future where the names and families of the monarchs, and the borders of their dominions might change, but — a future where technological advances and social changes have led to a substantially different world.
Perhaps we are all aware of living in this special era of history — this era of unbelievably rapid change in so many different areas — and our conception of “the future” is the more permanent, steady equilibrium. Change this rapid and vast in scale feels like it must have some ultimate destination; like it cannot continue forever.
Today, we have debates — and we’ve had them on this show — about whether artificial intelligence will result in a substantially different world over the next few decades. There are wild utopian and dystopian visions, where the world becomes an AI-enabled paradise or else is destroyed by artificial intelligence. I make this comparison not because the technologies are similar — they aren’t — but because the attitudes we have towards them are. Nuclear technology was viewed in the same way, and as the same panacea, in the 1940s and 50s.
The visions for what a workable fusion reactor would lead to were truly utopian — and perhaps they sound a little familiar to anyone who’s read Ray Kurzweil or other futurists write about AI:
“Our generation lives between Hell and Utopia.
For the very force that can destroy the human race can create wonders without end on earth. It is small wonder that men’s minds today shuttle between fears of doom and dreams of unprecedented bounty.
Here are miracles within our reach — in medicine and science, production and power — possibilities so immense, so magical that we can create a life on earth more golden than man ever before dreamed possible. Here is the Utopian Promise of the Peacetime Atom.”
David Woodbury, Atoms for Peace, 1955.
When Ronald Richter claimed to have developed fusion in Argentina, it was met with scepticism, but it inspired scientists in the West to take up the quest. Soon enough, with Z-pinch devices and stellarators, they felt they had several feasible designs that might one day realise nuclear fusion. But by the end of the 1950s, after some high-profile and embarrassing failures such as the ZETA project at Harwell, the authorities in charge of allocating funding became ever more sceptical that nuclear physicists could deliver on these utopian projects under budget and on time. On the physical side, each experiment — including those that were overhyped by the media as having finally “cracked” fusion — did show incremental improvements. The plasma was confined for longer, or it attained a higher temperature, or one instability or another was removed by the application of a new magnetic field. But none of them were able to do what they were intended to do: ignite a self-sustaining fusion reaction that could produce more energy than it required to run. The neutrons that were produced by the pinch experiments of ZETA, and by the US Perhapsatron and Columbus devices, came from individual particles that were rapidly accelerated by electric and magnetic fields as the plasma confinement fell apart. Qualitatively, the process that produced them was basically the same as Rutherford’s early bombardment experiments in 1934: it was no route to fusion power.
The ironic thing about the ZETA “failure” was that the embarrassment was caused due to these neutrons not coming from a genuine fusion reaction — but mere months later, a slightly modified machine did get neutrons from a genuine fusion reaction. ZETA was a z-pinch machine — it generated its pinching, and hence the heat and pressure required for nuclear fusion, by driving a current down the spine of a tube of plasma and relying on the resultant Lorentz force to compress the plasma. The Scylla experiment in the US ran the current around the circumference of the plasma instead, a technique called theta-pinch; a strong magnetic field then runs along the axis of the cylinder, and a compressing radial force results. These theta-pinch machines were more resistant to instabilities.
Remember Teller’s vision of plasma instabilities as like a series of rubber bands trying to confine jelly, which squeezes out through the gaps? The theta-pinch alleviates that problem slightly, due to something called Alfvén’s theorem. When we imagine lines of the magnetic field — or lines of magnetic flux, as they’re called — Alfvén noticed that they tended to be “frozen in” to a plasma or fluid that contained them. In other words, the plasma likes to rearrange itself to preserve any magnetic field that’s inside it. When you use a theta-pinch device, the magnetic field lines run right down the spine of the tube of plasma — “frozen in” — and, consequently, it’s better protected against certain kinds of instability.
This allowed theta-pinch devices to heat deuterium to ten million kelvin, and produce all of the expected products of thermonuclear fusion. In 1958, they had effectively succeeded where ZETA had failed. But it didn’t seem to matter.
Even though genuine neutrons from thermonuclear fusion were being produced, the theory had moved on: everyone was now thinking about Lawson’s criterion, whereby you need the product of density and confinement time to be high enough to produce power. The pinch experiment didn’t seem capable of confining the plasma for very long before instabilities tore it apart. These instabilities were an immense headache — they weren’t entirely understood theoretically, and they were very unpredictable experimentally. To give you an idea of how complicated plasma instabilities can be, I should of course point out that there is still substantial research going on into plasma instabilities sixty years later — with all the time and effort that’s gone into plasma physics and fusion research, there are still unsolved and perhaps unsolvable problems associated with turbulence.
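Lawson's criterion can be sketched numerically. A commonly quoted rule of thumb for D-T fuel near its optimum temperature is that the product of density and confinement time must exceed roughly 10^20 seconds per cubic metre for net power; the machine parameters below are invented purely for illustration.

```python
# Sketch: the Lawson product n * tau for D-T fusion.  The threshold
# below is the commonly quoted order-of-magnitude figure, and the
# "machine" numbers are invented for illustration.
LAWSON_DT = 1e20  # s / m^3, approximate threshold for net power

def lawson_product(density_m3, confinement_s):
    """Density times confinement time, the quantity Lawson's criterion bounds."""
    return density_m3 * confinement_s

# A 1950s-style pinch: a dense-ish plasma held for mere microseconds.
n_pinch, tau_pinch = 5e20, 1e-5
print(f"pinch: {lawson_product(n_pinch, tau_pinch) / LAWSON_DT:.0e} of threshold")

# What net power would take: ~1e20 per cubic metre held for about a second.
print(f"goal:  {lawson_product(1e20, 1.0) / LAWSON_DT:.0e} of threshold")
```

Seen this way, the pinch machines were short of breakeven by four or five orders of magnitude in confinement time alone, no matter how many neutrons flashed out of their instabilities.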
One of the fundamental issues here was that, at the start of fusion research, there was a misconception about how plasmas could be treated. Remember when we described the stellarator and pinch concepts — as charged particles being bent around on orbits, or forced inwards, according to simple applications of Maxwell’s equations and the Lorentz force law, which tells us how electromagnetic forces affect charged particles? This is really assuming that a plasma can be treated as a sort of cloud, a collection of charged particles — something closer to a gas. In reality, it is the fourth state of matter for a reason: it behaves differently, more like a fluid than a set of individual charged particles, and you need a full magnetohydrodynamic (MHD) theory to understand it. The influence between the charged particles is very important; collective motions of the plasma can produce propagating and decaying “waves” of motion. Instabilities can even go beyond the realms of ideal MHD theories; with no theory to guide the experimenters, they were at a loss as to how to correct some of these problems.
This is what can happen when you attempt to force a complex system with many components that all interact with each other to do what you want it to do. Take running a current in a plasma to cause it to pinch. You succeed in your primary aim, to compress the plasma and heat it up; but you also get unwanted secondary and tertiary effects, with the plasma pinching in multiple locations and forming many small “links” — a chain of linked blobs of plasma called the “sausage instability” after what it resembles. Every way physicists seemed to attempt to manipulate plasma, it led to these secondary effects — and most of them were self-reinforcing, they grew rapidly, and ultimately destabilised the plasma before a self-sustaining reaction could get going. Just the fact that you had a small, flash-in-the-pan thermonuclear reaction going — this did nothing to change the fundamental plasma physics. Charles Seife, in his wonderful book The Sun In A Bottle, puts it well: “Physicists built bigger and more expensive machines to attempt to wrestle the plasma into submission, but they were just uncovering more and more subtle ways that the plasma fought against their will.”
Quoting from Joan Lisa Bromberg’s book on fusion:
Okay, so much for the pinch experiments. But what about Lyman Spitzer’s brilliant ski-lift brainwave, the figure-of-eight Stellarator with its complex, twisty magnetic field? Here, the whole point was confinement — attempting to contain a hot plasma with a complex magnetic field arrangement first and foremost as you bring it up to a controlled burn, rather than rudely and rapidly pinching it into instability and disintegration. Was this approach working better in the 1950s?
In a word, no. Plasmas suffered from problems in the Stellarator, too. It could achieve higher temperatures, but physicists saw “microinstabilities” — particles from the plasma being dumped onto the walls due to small, local perturbations. A slight difference in density or temperature throughout the plasma expanded rapidly and threw particles against the walls. This reduced the density, and it took heat out of the system via the kinetic energy of the particles. Since the whole point was to confine a hot, dense plasma, these microinstabilities — which resulted in the loss of particles and the loss of energy — were disastrous for the fusion project. The hotter the plasma got, the faster the particles were lost: a vicious cycle.
It was believed early on that the loss rate could be reduced quite rapidly just by increasing the magnetic field strength used to confine the plasmas. The initial theory suggested that the loss rate would scale with the inverse square of the magnetic field strength — in other words, doubling the magnetic field strength would cut the loss rate to ¼ of its previous value. But these dramatic improvements failed to materialise, even as successively larger machines delivered stronger magnetic fields. The particle loss rates didn’t drop anywhere near as quickly as the theory had predicted; confinement times weren’t improving, and after a fraction of a second, the particles would still spiral out of control, making sustained fusion impossible.
In the early days, Spitzer’s optimism had carried the day: he hoped that “At 1 million degrees Kelvin, the ionization of hydrogen should be complete, and the plasma may be regarded as an assembly of free charged particles. The phenomena should follow simple scaling laws; observations at 1 million degrees should allow for accurate prediction at one hundred million degrees.” But he was wrong about the plasma physics in the early 1950s — plasma was far more complicated — and he was wrong about the scaling laws: you could not extrapolate its behaviour simply to higher temperatures. Every new frontier in temperature and density brought new instabilities.
This increasing despondency was reflected in how Spitzer described his own Stellarators. The Model-A and B were research experiments. The first was to prove that plasma of a million degrees could be created; the second would prove that it could be confined for long enough to characterise its behaviour. Once that was done, the Model C would, at least in theory, heat the plasma to self-sustaining fusion reaction temperatures. The Model-C was supposed to be “partly a research facility, partly as a pilot plant for a full-scale reactor.” He had initially expected to have a working fusion reactor within sight by the end of the 1950s. Instead, the $24-million Model C was now described as “entirely a research facility, without any regard for the problems of a prototype” — and according to Seife, when it first came online, the only reason it was better than its predecessors was because it was physically bigger — which meant that particles took a longer time to crash into a wall. Things weren’t looking too promising for the Stellarator to produce sustained fusion any time soon.
A profound pessimism settled over the fusion community around this time, with the late 1950s and early 1960s known as “the Doldrums” in fusion research. By the time the early 1960s came around, gone were the glory days when fusion scientists, at least in the West, could get three alternative approaches funded with little difficulty. Many were struggling for the survival of their research groups, and they faced new competition from new generations of nuclear fission reactor: the fast breeder concepts. In nuclear fission, funding bodies saw a technology that had already been proved to work: fusion, on the other hand, had promised a great deal and not delivered.
It seemed that the fusion researchers had been immensely over-optimistic in their speculations that fusion would be possible within a decade. We’ve discussed part of why already — the incredible pace of development in physics and in nuclear technology in that era. But Joan Lisa Bromberg makes a very good point in her book:
“There was the postwar exuberance over the possibilities of technology, but there was also no one in 1952 who commanded all the bits and pieces of scientific and engineering knowledge required to make fusion generators. The disciplines of fusion physics and fusion engineering did not exist. There were astrophysicists like Spitzer who knew about the rare, cold plasmas of interstellar space. There were accelerator specialists like Tuck and Wilson who understood the careful design of magnetic fields and power supplies. There were cosmic ray scientists who had studied the behaviour of charged particles in magnetic fields. There were even weapons physicists who could contribute some rudimentary methods for measuring the rapid plasmas. But no one had an overview: the discipline of the nuclear engineer did not exist.” Constructing a working fusion reactor, in other words, would require a fusion of different disciplines in its complexity. Everyone could only see their own part of the whole. The result was that the optimistic predictions fell flat. And those in control of the purse-strings, especially in the US, were losing patience. “Is this not a very expensive way to get this basic knowledge? We can build these machines until the cows come home. I am wondering in my own mind, how long do you have to beat a dead horse over the head to know that he is dead?” said Senator John Pastore, on the Appropriations Committee… back in 1964.
To understand what woke the research community from its depression, we need to return once more on this show
[subtly play the USSR national anthem in the background?]
to the Soviet Union.
It was known in the West that the Russians were working on a nuclear fusion project. In 1956, for example, Igor Kurchatov — one of those scientists we mentioned in the episodes on the hydrogen bomb — came to the US and gave an astonishingly open talk that described their attempts to build something like a fast pinch fusion reactor. In 1958, there was a conference in Geneva with Soviet attendance, where a lot of fusion material was declassified. The Soviets at least appeared to be at a similar stage to the US at this point: they had pinch experiments that could produce neutrons, but there was still some uncertainty about whether or not they were thermonuclear in origin, and this was at any rate still an unknown distance from a working reactor. But in the wild imaginations of the defense-minded scientists at Los Alamos, the Russians could have anything: giant magnetic mirrors, mass-produced fusion reactors. Perhaps they had come up with some alternative reactor design that was superior to anything that had been conceived of in the West.
Well, as a matter of fact, they had. And, like many pieces of science history, its own little mythology has developed.
The story starts with a man called Oleg Lavrentiev. His father was a clerk, his mother was a nurse. He volunteered for the frontlines during the Great Patriotic War — known to us as World War II — and, after fighting the Nazis, ended up stationed on Sakhalin. To call this a bit of a backwater is an understatement. It’s the island off the Far East of Russia, just north of Japan. Seriously, it’s there — it’s bigger than Sri Lanka — but, aside from being disputed between Japan and Russia, not exactly pivotal in world history. It’s there that Oleg had plenty of time to study a subject he’d first been fascinated with in school — nuclear physics — and wrote directly to Stalin with some suggestions.
The letter to Stalin, naturally, didn’t get anywhere. Although listeners to our sister podcast, Autocracy Now — which has extensively covered the life of Stalin — will know that he wasn’t averse to answering letters. But another letter, to the Central Committee of the Communist Party, was identified as more than the ramblings of some crackpot. Oleg was onto something.
According to ITER’s website: “What junior sergeant Lavrentiev had devised in his remote posting were the blueprints of an H bomb and a concept to produce energy through controlled thermonuclear reactions.”
Lavrentiev was immediately taken off latrine-cleaning duty or whatever else one gets up to on Sakhalin, and given a private room and all the resources he needed to expand on these ideas. His paper eventually found its way to Andrei Sakharov, who wrote: “I think we need a detailed discussion of comrade Lavrentiev’s draft proposal. Regardless of the outcome of the discussion, now is the time to note the creative initiative of the author.”
The obscure sergeant with no formal training and a nuclear physics hobby had sparked the Soviet H-bomb project and the Soviet nuclear fusion project. He left the army and quietly laboured at the Kharkov Institute of Physics, working on nuclear-related problems right up until his death in 2011: it was only in 2001 that all of this information was declassified and published in a Russian physics journal, and the remarkable story of the genesis of Soviet fusion became public.
Loyal listeners will remember Andrei Sakharov from our episodes about the Soviet atomic bomb project. In those secret cities, the Atomgrads, where the Soviets worked on the properties of thermonuclear weapons, Sakharov was one of the geniuses who enabled them to catch up to the Manhattan project and, in some areas, overtake it. Like many scientists who worked on the atomic bomb, he felt the weight of moral responsibility — and the requirement for cooperation for the whole human race that the bomb, and our ability to destroy ourselves, implied.
“Thousands of years ago human tribes went through a fierce survival test; and in this competition it was not only important to be good at wielding a stick, but also be able to think, preserve traditions and selflessly help fellow tribe members. Today humanity is going through the same test.”
He later became a fierce and staunch advocate for human rights and a critic of the Soviet regime — winning the Nobel Peace Prize for his political efforts.
“At first I thought, despite everything that I saw with my own eyes, that the Soviet State was a breakthrough into the future, a kind of prototype for all countries”. Then he came, in his words, to “the theory of symmetry: all governments and regimes to a first approximation are bad, all peoples are oppressed, and all are threatened by common dangers.”
Sakharov came up with many of the same ideas as Edward Teller in parallel during the development of the nuclear fusion bomb — the idea of multiple layers of fissionable and fusionable material, for example, and eventually his own version of the Teller-Ulam device (which was called Sakharov’s Third Idea in the Soviet Union.)
Yet also, as early as 1950 — around the same time that Ronald Richter was making his wild promises to Juan Peron — Sakharov was thinking about magnetic confinement fusion, a controlled reaction for the production of energy. Fortunately for history, his design was slightly different to those pursued in the West.
Broadly, so far, we’ve discussed two types of fusion design. The pinch machines ram a current through a plasma, which compresses it — ideally, hopefully, to sufficient temperatures and densities to cause it to fuse. But in practice, plasmas produced this way tended to succumb very quickly to instabilities, with no hope of getting out more energy than you put in. Meanwhile, the stellarator uses an arrangement of magnetic fields to attempt to confine the plasma for a very long time. Rather than crushing the plasma, the stellarator attempts to hold onto it — but the stellarator scientists of this era were seeing individual particles diffuse outwards and hit the walls far faster than they had anticipated.
Both are attempting to satisfy this Lawson criterion — the idea that for a working fusion reaction, you need to focus on the triple product of density, temperature, and confinement time. Stellarators were able to produce higher confinement times, but suffered from diffusion — and it was more difficult to keep the plasma dense and hot. Pinch machines could produce impressive densities and temperatures, but instabilities rapidly caused the “crushed” plasma to fall apart or radiate its energy away.
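For those who like to see the arithmetic, the triple product is simple enough to play with in a few lines of Python. A quick sketch — the threshold figure is the commonly quoted ballpark for deuterium-tritium fusion, and the example plasma numbers are invented purely for illustration:

```python
# A toy check of the Lawson "triple product" criterion for D-T fusion.
# The commonly quoted ignition ballpark is n * T * tau >= ~3e21 keV·s/m^3;
# the example plasma below is made up, not from any real machine.

def triple_product(density_m3, temp_keV, confinement_s):
    """Return the fusion triple product n·T·τ in keV·s/m³."""
    return density_m3 * temp_keV * confinement_s

DT_THRESHOLD = 3e21  # keV·s/m^3, rough figure for deuterium-tritium ignition

# A hypothetical plasma: 1e20 particles/m^3, 10 keV, confined for 1 second.
value = triple_product(1e20, 10, 1.0)
print(value >= DT_THRESHOLD)  # prints False: this example plasma falls short
```

Each type of machine effectively traded one factor of this product against the others: stellarators bought confinement time at the expense of density and temperature, pinches the reverse.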
Sakharov’s idea used a donut-shaped torus with coils of wire that induced magnetic fields to establish a loose hold on the plasma. This is a little like the stellarator. But another set of coils uses fluctuating magnetic fields to induce a current inside the plasma — much like a pinch machine. In a way, then, the device is something like a combination of the pinch machines and the stellarators: a current is driven through the plasma that heats it and crushes it into a denser column, while the coils of the device also act to confine the plasma for a longer time. The device was called the toroidalnaya kamera s magnitnymi katushkami (“toroidal chamber with magnetic coils”), or tokamak for short.
The plasma current is required to confine the plasma for any length of time, by acting to prevent it from diffusing outwards like it would in the stellarator through this pinch mechanism. But the plasma current also makes the behaviour of the plasma more complex — and, if it’s ever interrupted, the plasma is flung out in all directions. These events — known as disruptions — can occur when magnetohydrodynamical instabilities prevent the current from flowing coherently through the plasma. Since the current is doing a great deal of the work confining the tokamak’s plasma, this disruption can be an extremely violent event and can even damage the tokamak. To quote a paper about the upcoming device, ITER, “A lot of energy ends up in the wrong places” when a disruption occurs and plasma is shot in all directions. According to Seife, one disruption at the JET experiment in Oxford caused the entire device to jump a centimetre into the air.
Other recent papers have even suggested that, if major disruptions occurred in the new Tokamak, ITER, they could either destroy the machine or render it non-operational for months. In fact, according to some people, the real mission of ITER is to demonstrate a “disruption control system” that studies and shows that the tokamak — and modern engineering — can deal with the consequences of a big disruption with the huge currents and plasma energies that might be required to produce more power from a tokamak’s fusion than you put in.
There are various different methods of trying to deal with disruptions, including using large amounts of shielding on the internal components of the tokamak to mitigate the damage that they cause — or adjusting the magnetic fields to attempt to prevent them entirely. Ultimately it looks like a combination of both would be required for a successful power plant. Even after years of operating some of the experimental tokamaks, around 10% of runs result in disruption, so it may be the case that fusion engineers and scientists have to learn to live with them, diagnose and combat them as they happen.
Now, this is very controversial — and there’s always what you might call some fusion politics to negotiate when you read anything about any particular type of fusion, because everyone has a favourite design, and usually pretty good reasons for having one. But if there really is some non-zero probability of your entire power plant breaking down for two months, or needing a complete rebuild after a disruption event, that’s obviously a concern — both for people designing tokamaks today, and for whether the design can ever be viable at all.
But we’ll return to all of this later. For now, for us, the Tokamak is just an idea in Andrei Sakharov’s head. Next episode, we’ll talk about the early Soviet experiments with the tokamak, the eventual and remarkable trip by Western physicists to the USSR at the height of the Cold War to look at their research, and the Tokamak Revolution. Meanwhile, one of the new diagnostic instruments that those scientists took with them to assess what progress had been made with the Tokamak would have its own, unique place in fusion history: they had a laser.
Thanks for listening etc.
Nuclear Fusion: The Tokamak Revolution
Hello, and welcome to Physical Attraction. In the last episode of our ongoing nuclear fusion saga, we talked about how z-pinch machines and stellarators were both tried by scientists in the West in the 1950s and 1960s — but both suffered from various kinds of plasma instability. The stellarators could confine plasma reasonably well, but struggled to heat it to the necessary temperatures and densities before the particles diffused away to the sides. Pinch machines could heat plasma to much higher temperatures, but instabilities prevented them from confining plasma for any length of time. In the UK, the ZETA experiment had caused a great deal of hype surrounding nuclear fusion, but its neutrons turned out to be from collisions and not thermonuclear fusion reactions — and even when these were achieved that same year, instabilities plagued the project. Bigger machines, bigger budgets, and disappointment led to the doldrums of the 1960s.
Meanwhile, in Russia, a design that combined aspects of the stellarator and the pinch machine — the tokamak — was being developed. The initial spark arose when a bright sergeant called Oleg Lavrentiev wrote to Stalin with his idea for a fusion power source: eventually, Sakharov and others came up with the initial idea for the tokamak. His early calculations suggested you might be able to achieve nearly 1 GW of power — that’s more than two normal power stations, and on a par with the largest solar array — with a machine of radius 12 m, with a magnetic field of around 5 T, which was not unreasonable at the time. More ambitious were the temperatures required: a cool (not especially) billion degrees Celsius.
Some of the earliest people to work on the theoretical physics of plasma and the magnetic fields were those very same people we discussed in earlier episodes — people like Kurchatov and Sakharov, who were being detained in Atomgrad-type technical prisons, more or less forced to work on the bomb. In fact, it was Beria, the sadistic head of the NKVD, who first signed off on any funding going towards controlled thermonuclear fusion experiments. Amazingly, the exact same phenomenon played out in the USSR as in the West — when the, um, amateur scientist Ronald Richter and Juan Peron claimed to have cracked fusion, Kurchatov quickly took advantage of the press attention and the situation to propose a similar project in the USSR.
Lev Artsimovich is one of the scientists really credited with helping to make the tokamak a reality. Over the next decade or so, experiments took place in small tokamaks — the largest only around a metre in size — but with successively higher magnetic fields. I’ll stick to units of tesla for measuring magnetic field strength, even though 1 tesla is a pretty big unit. A fridge magnet is roughly 5 thousandths of a tesla, an MRI scanner will go between 1 and 3 tesla, the LHC’s magnets reach 8 T, and the world record for any stable magnetic field is 45 T. [This is nothing compared to some crazy astrophysical objects like magnetars, which can get up to billions of tesla, but we can’t exactly bring one down for use in earthly tokamaks. If we could, we’d just get a better fusion reactor: a star.]
So these early Russian tokamak experiments were operating with machines about 1m across, with magnetic field strength around that in an MRI scanner. But even these small devices were enough to reveal and correct some early instabilities, and by 1962, they’d already discovered the “disruption” instability we mentioned. A tokamak’s plasma is kept stable by the current that runs through it and pinches it, but if that current is interrupted even for a microsecond — say, by fluctuations in the motion of the plasma — you can get a disruption that violently throws the plasma into the walls of the machine, sometimes causing a great deal of damage. Amongst the other issues they discovered were those of impurities in the plasma. Try as you might to ensure that your plasma is entirely hydrogen/deuterium fuel, inevitably ions of other species will get in there — including beryllium, carbon, oxygen, neon, argon. The issue is that some of these elements won’t be stripped of all of their electrons — for example, to strip argon of every electron requires more than 3 kilo-electron volts of energy: or 34 million degrees Celsius.
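The conversion between particle energies and temperatures used here is just the Boltzmann constant: one electron-volt corresponds to roughly 11,600 kelvin. A quick sketch, reproducing the argon figure above:

```python
# Converting particle energies to temperatures, as in the argon example:
# T = E / k_B, where 1 eV corresponds to about 11,600 K.

K_B_EV = 8.617e-5  # Boltzmann constant in eV per kelvin

def ev_to_kelvin(energy_ev):
    """Temperature (K) whose characteristic thermal energy is energy_ev."""
    return energy_ev / K_B_EV

# 3 keV, roughly the energy scale for stripping argon bare:
print(f"{ev_to_kelvin(3000):.3g} K")  # ~3.48e7 K, i.e. ~34 million degrees
```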
Any less than that, and instead the remaining electrons in argon will get “excited” into higher energy levels by collisions with other particles in the plasma. They can then rapidly “de-excite”, emitting photons of radiation, and hence energy escapes the plasma. So the little impurities acted as highly efficient cooling devices for the hot plasma. This frustrated early efforts to heat the plasmas to fusion temperatures, as 80–90% of the energy supplied to early tokamaks could be radiated away by impurities. At the same time, electrons can be captured by these nuclei in a process called recombination, which also results in radiation of photons and loss of useful energy from the plasma. And, finally, the charged particles themselves radiate whenever they’re accelerated — whether spiralling in the magnetic field, or being deflected in collisions, which produces the aptly-named “braking radiation”, or bremsstrahlung. You can live with that as a fusion reactor, providing you’re producing enough energy through fusion to replace what’s lost. But the impurity ions have virtually no chance of fusing at these temperatures; all they’ll do is drag the temperature down.
Another important discovery during the early days of tokamaks was something called the “Safety Factor.” To explain this, I’ll need to get into the physics of charged particles moving in magnetic fields a little more. We’ve talked about how the magnetic force pushes particles in a direction perpendicular to their velocity — as if towards the centre of a circle. It’s this perpendicular acceleration that causes charged particles to rotate in orbits around the magnetic field.
This is true if you imagine the particles in a single, 2D plane — as if rotating on a piece of paper, with the force directed to the centre. But what about perpendicular to the plane — along the lines of the B-field? There is no force here. So if the particle has any velocity in that direction, it will continue to drift.
This means that the natural orbit for a charged particle in a magnetic field is actually a helix — like DNA — orbiting around a point that slowly drifts, spiralling and moving forwards, twirling towards freedom.
So the particles in a plasma — until they collide — are orbiting around the centre of the tokamak, moving in a circle — but they’re also drifting around that circle, following a helix. If you imagine a helix that bends around into a circle, you’ll have an idea of the basic trajectory for these particles.
What the Soviets discovered was that, if the particles drifted around the circle more quickly than they orbited the centre of the tokamak, you saw fewer instabilities. Specifically, the kink instability — the one where bulges of plasma became exaggerated over time and gradually hit the walls? That was defeated, providing you could set things up so that the particles drifted around the circle more quickly than they orbited in the tokamak’s big, donut-shaped chamber.
This, however, meant that you needed to either have more powerful external magnets — or a reduced pinch current in the tokamak. But it was a tradeoff that worked for stability.
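This drift-versus-orbit condition is what plasma physicists came to call the safety factor, usually written q. As a rough sketch — this uses the standard textbook expression for a circular plasma, and the numbers are illustrative rather than any real machine’s:

```python
# A minimal sketch of the tokamak "safety factor" q for a circular plasma:
# q ≈ (r / R) * (B_toroidal / B_poloidal).  The poloidal field is produced
# by the plasma current, so reducing the pinch current raises q.
# Kink modes are suppressed, roughly, when q > 1 (the Kruskal-Shafranov limit).

def safety_factor(r_minor, r_major, b_toroidal, b_poloidal):
    """Safety factor for a circular-cross-section plasma (textbook formula)."""
    return (r_minor / r_major) * (b_toroidal / b_poloidal)

# Illustrative numbers only: 1 m minor radius, 3 m major radius,
# 5 T toroidal field, 1 T poloidal field.
q = safety_factor(1.0, 3.0, 5.0, 1.0)
print(q > 1)  # prints True: kink-stable by the rough criterion
```

You can see the tradeoff directly in the formula: a bigger toroidal field (stronger external magnets) or a smaller poloidal field (less pinch current) both push q up.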
Many of the little additions made to improve the tokamak followed a similar pattern to the development of fusion reactors in the West. Remember the guys who took some conducting metal and put it on the inside of the torus, hoping that it would suppress instabilities by allowing a charge to build up and repel chunks of plasma as they drifted away from the main bunch? This was also an improvement made to the tokamak.
By the mid 1960s, the Russians were getting impressive results. It may only have been for milliseconds, but the tokamak was capable of confining plasma for ten times longer than any other machine, and at far higher temperatures than the stellarator could achieve.
When the Russians first announced these results, it caused a great deal of debate in the fusion community. The reason was all to do with this theoretical formula that those in the West had developed. You’ll remember that the main problem with the stellarator was plasma diffusion.
Diffusion is a pretty universal phenomenon in physics. If you’ve ever sprayed a can of aerosol in a room, you’ve seen diffusion in action. The metaphor they used with us at University was that of drunk students staggering away from the pub at closing time. Under classical diffusion, particles collide with each other as they move. Every time that happens, the direction they travel is randomised — a so-called “random walk”. The result of a random walk is that, gradually, over time, you move away from where you started. In fact, assuming you move at a constant speed, and over a very great number of steps, the distance you end up from where you started is proportional to the square root of the time that’s passed — the square root of the number of steps that you take. This is the same formula that gradually allows dye to spread through water, or aerosol droplets to spread through air: quickly at first, and then more gradually.
I think this makes intuitive sense if you consider stumbling around randomly. Initially, you might stagger quite quickly away from the point that you started from — after all, your first step will always take you a full step away, and the second one is quite likely to cause you to move further. This corresponds to the steep slope at the start of the square root function. But, gradually, over time, you’re less and less likely to move directly away from your starting point. The square root of 25 is 5, the square root of 100 is 10, and the square root of 400 is 20: obviously, the square root grows slower and slower as time goes on — although it does always increase. Diffusion and random walk are far less efficient than running away: although if you stop to solve the diffusion equation, you probably will be eaten by that dinosaur.
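The square-root law is easy to verify with a simulated random walk. A minimal sketch — the step counts and walker numbers here are arbitrary:

```python
# A toy 2D random walk: the RMS distance from the start grows like sqrt(steps).
import math
import random

def rms_distance(n_steps, n_walkers=2000, seed=0):
    """Root-mean-square distance from origin after n_steps unit-length steps."""
    rng = random.Random(seed)
    total_sq = 0.0
    for _ in range(n_walkers):
        x = y = 0.0
        for _ in range(n_steps):
            angle = rng.uniform(0, 2 * math.pi)  # each collision randomises direction
            x += math.cos(angle)
            y += math.sin(angle)
        total_sq += x * x + y * y
    return math.sqrt(total_sq / n_walkers)

# Quadrupling the number of steps should roughly double the RMS distance.
print(rms_distance(100) / rms_distance(25))  # close to 2, up to statistical noise
```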
Early plasma physicists had hoped that diffusion would be totally classical for the plasma. This would mean that, due to the way collisions depended on the field strength — which determines how quickly the plasma rotates around the centre of the tokamak, amongst other things — the diffusion rate would be proportional to 1/B² (the inverse square of the field). Instead, from analysis of many of the plasmas that had been generated in experiments, plasma physicists thought that turbulence in the plasma — its interaction with the electrical and magnetic fields — was causing diffusion to occur much more quickly, scaling as 1/B. In other words, for a given magnetic field, the theory — called Bohm diffusion — predicted that plasma would diffuse out of the tokamak far more quickly than the classical theory had led anyone to expect.
This would mean that confinement would require impossibly high magnetic fields, or impossibly large reactors — and certainly, achieving what the Russians had claimed to with the tokamak would be impossible. Spitzer in particular had concluded that the Bohm diffusion must be true for all plasmas, and this in part led to the great pessimism of the 1960s — perhaps fusion wasn’t possible with magnetic confinement at all.
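The difference between the two scalings is easy to see numerically. A sketch with the physical constants folded away, since only the dependence on B matters for this illustration (for reference, the standard Bohm coefficient is D = kT / 16eB):

```python
# Comparing how the two diffusion laws respond to doubling the magnetic field.
# Classical diffusion scales as 1/B^2; Bohm diffusion scales as 1/B.
# All physical constants are folded into const=1.0: only the scaling matters here.

def classical_diffusion(b_field, const=1.0):
    return const / b_field**2

def bohm_diffusion(b_field, const=1.0):
    return const / b_field

# Doubling B: classical diffusion drops to 1/4, Bohm only to 1/2.
print(classical_diffusion(2.0) / classical_diffusion(1.0))  # prints 0.25
print(bohm_diffusion(2.0) / bohm_diffusion(1.0))            # prints 0.5
```

That gap compounds fast: under Bohm scaling, matching the confinement that classical theory promised from a 10× stronger field would require a 100× stronger field.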
This Bohm diffusion formula was an empirical law — it came principally from observations, and didn’t receive a full theoretical explanation until the 1970s. For this reason, it wasn’t impossible that there might be some way to get around this apparent diffusion limit: it wasn’t derived from some universally accepted law of physics. But it did seem to hold pretty well for all the machines that the Western scientists were using, which made them very sceptical of the claims about the tokamak. Sometimes laws start off as empirical — just observations, like “things tend to fall down”, and you later discover the more fundamental theory that means they’re always true. Sometimes, though, there are ways to get around them. When the Soviets initially announced their tokamak results in 1965, Bohm diffusion seemed to be an iron-clad limit — although results at the Model C stellarator, and from a new kind of multipolar field for plasma confinement, seemed to suggest that the Bohm limit wasn’t universal. Nevertheless, the US scientists in particular underestimated the Russians — they were still sceptical of the idea that they’d produced machines that were an order of magnitude better than the US had managed.
Things came to a head in a pair of international plasma physics conferences in 1968. The Russians claimed to have heated their plasma to 1 keV, or 10 million degrees Celsius — and to have confined it for 50 times the supposed limit due to Bohm diffusion. There’s a bit of a mirror image here. The Russians also worked on pinch machines at the same time as the US scientists, and even came over in the 1950s to warn that sometimes you’d see neutrons that weren’t the result of thermonuclear fusion. This was before the embarrassing ZETA episode where the scientists had to admit they hadn’t actually achieved fusion at all. Now, Spitzer was telling the Russians that they couldn’t possibly be right, and warning them that their means of measuring the plasma temperatures were inaccurate.
How do you measure the temperature of a plasma, after all? You can’t stick a thermometer into it that goes up to ten million degrees Celsius and then read it in a millisecond. The Soviets were still diagnosing the temperature of the plasma from properties of the magnetic field. In any plasma — in any gas that’s in equilibrium — the particle speeds follow what’s called a Maxwellian distribution. It’s a lot like a bell curve with a long tail at the high-speed end. There will be a small number of particles, by hook or by crook, that are substantially faster than all the others. Spitzer reckoned the Russians were just measuring the temperature of these anomalously hot electrons, not the centre of the distribution. The Russians insisted this was the temperature for the whole plasma.
Meanwhile, the British had been working on ways to avoid the embarrassments of ZETA. The reason they had been deluded into thinking that they might have attained fusion was partially down to the fact that they didn’t have a good enough idea of what was going on in the plasma. They were relying on measuring neutrons as a sign that fusion had been achieved, but they could be produced in other ways. To understand the plasmas, and to use their experimental results, they needed more accurate measurements.
In 1960, the laser was invented. I won’t get into the details of how it operates too much here, because we’ll have to do a whole episode on this, but suffice it to say that lasers produce intense monochromatic light. And when I say monochromatic, I mean monochromatic. There are a range of wavelengths that will look like more or less the same shade of red to you or me, but the laser produces light at almost exactly a single wavelength. This is because all of the photons come from the same atomic transition — and so they are produced with exactly the same energy, and the energy of light determines its wavelength and — in the visible spectrum — its colour.
This is important, because it means that you can use slight changes in the wavelength of the laser light as a means of measuring temperature. When you shine a laser through the hot plasma, some of the electrons collide with the photons of the laser and pass on energy — this is what’s called inverse Compton scattering.
If the photons and particles were left to collide with each other long enough, in some imaginary box, eventually they’d all reach the same temperature. As it is, the photons end up with slightly more energy than they had originally after passing through the plasma. By using a laser that’s “less hot” than the plasma, you can measure the increase in energy from the laser photons after these collisions, and determine the temperature of the plasma that way.
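To give a rough sense of why the scattered laser light carries a temperature signal, here’s an order-of-magnitude sketch. The numbers are my own assumptions — a ruby laser at 694 nm probing a 1 keV plasma, matching the Soviet temperature claims but not taken from the Culham team’s actual analysis. The scattered light gets smeared in wavelength by a fraction of order the thermal electron speed divided by the speed of light, so a hotter plasma means a broader spectrum.

```python
import math

m_e = 9.109e-31   # electron mass, kg
keV = 1.602e-16   # joules per keV
c = 3.0e8         # speed of light, m/s
ruby = 694.3      # ruby-laser wavelength in nm (an assumed probe)

T_e = 1.0         # electron temperature in keV, as claimed for the tokamak
v_th = math.sqrt(2 * T_e * keV / m_e)  # thermal electron speed, m/s
spread = ruby * v_th / c               # rough spectral spread, nm

print(f"thermal speed ~ {v_th:.1e} m/s ({v_th/c:.1%} of c)")
print(f"spectral spread ~ {spread:.0f} nm around {ruby} nm")
```

A spread of tens of nanometres around the laser line is comfortably measurable with a spectrometer — which is what made this such a decisive diagnostic.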
Bear in mind that this is 1969 — at the height of the Cold War. The Vietnam proxy war was still going on; the Soviets had rolled into Czechoslovakia the year before. Nevertheless, Artsimovich — presumably at some personal and political risk — requested that five British scientists, later known as “The Culham Five”, be sent to the Soviet Union, with their lasers, to settle the dispute once and for all. There’s actually a memoir by one of the Culham Five: Dr Forrest wrote a book — Lasers Across the Cherry Orchards — about his experiences. When they came up with a new way of measuring the temperature of plasmas, they could hardly have expected to be invited across the Iron Curtain. The decision was so sensitive that the UK Cabinet had to clear the trip by vote. After all, these people knew Britain’s nuclear secrets. At the same time, they were about to work very closely with the top nuclear scientists from the USSR — at the height of an era defined by nuclear weapons secrecy.
Nevertheless, for the good of science, they went. Dr Forrest says he knew they were being monitored continuously by the KGB, who made their presence known in subtle ways — for example, by changing a lightbulb after the British scientists had discussed it burning out. [I’m reminded, in modern Russia, of US Ambassador Michael McFaul’s anecdote of being left a copy of the Russian version of the Kama Sutra after his wife came to visit.]
While getting used to day-to-day Russian life and to the “startled looks” they were getting when they walked through the Moscow streets, the British team set about adapting the equipment they had designed to the novel tokamak machine. This meant opening up the torus to fit “windows” and a collimator, setting up vibration-proof optical benches for the laser and aligning its beam through complex optical systems. The activity, remembers Mike Forrest, was “frenetic” and because of the inevitable technical problems, the Russians were growing impatient. “Nerves,” he says, “were getting frayed.”
This did not prevent the Russian hosts, both high officials and the “man on the street,” from being exceptionally kind to their foreign guests. “We got the VIP treatment from the authorities: Bolshoi Ballet, Swan Lake and Sleeping Beauty of course, but also the rare privilege to visit the Treasury in the Kremlin and see the Russian Crown Jewels. Colleagues invited us for ski outings and when we were by ourselves, in the Moscow crowds, people would stop us and warn us about early signs of frostbite. In trams and buses, they would go out of their way to get correct change for us …”
Aside from some playful monitoring from the KGB, though, the expedition was a scientifically productive one. They shone a laser beam at the Tokamak’s plasma, and soon realised that the Russians were telling the truth. The plasma had reached temperatures of more than ten million degrees Celsius; the tokamak design could confine plasma for a far greater length of time.
This moment was pivotal in the history of nuclear fusion. There had already been pretty strong pressure in the scientific community in the West to start building tokamaks, which now seemed to produce these undeniably superior results. Almost immediately, the older designs were sidelined. The race from the early 1950s was back on, but this time, it was all about tokamaks. Before long, there were dozens of designs — the Texas Tokamak, the Doublet Tokamak, Alcator, Ormak, the Symmetric Tokamak. That last one was Lyman Spitzer’s beloved Model C Stellarator, hastily converted into a tokamak.
There was a great deal of debate at the time about whether or not this was the right way to go about things. After all, it was coming up to twenty years since the first optimists had suggested we might have fusion working in twenty years. Funding was available, but not “no-questions-asked” like in the early days — and the expectations around how much it would cost had skyrocketed. Given that, did it really make sense to build half a dozen slightly different tokamaks, rather than exploring all of the possible routes to fusion? Just because tokamak devices were performing the best *at that moment*, it didn’t necessarily mean that they wouldn’t ultimately lose out to stellarators, or modified pinch devices, or even some design that no-one had yet thought of. On the other hand, it was becoming clear that a monumental effort would be required for magnetic confinement fusion to work. If funding, scientific expertise, and time were divided between projects, maybe none of them would ever succeed. This debate is still going on today — and, yes, it’s contentious.
To an extent, the political nature of scientific funding — especially in the United States — came into play here. It’s far more persuasive to tell someone outside the fusion field “our latest device broke all records for confinement time” than saying “our latest device, although not as good as what the Russians use, allowed us to greatly improve our knowledge of plasma physics.” Which group do you think will get the money?
So the fusion community faced this dilemma: diversify, and risk being too slow and losing your funding; or throw your eggs in the tokamak basket and hope that the most promising-looking route did indeed pay off. And, by and large, in the 1970s, they chose the latter.
And not only in the US. Before 1969, there was only one tokamak outside the Soviet Union. Since then, tokamaks have been built in some 30 countries — including the USA, Japan, Britain, China, India, Korea, Iran, and France — and over 200 have been constructed in total.
And, in fact, although our understanding of plasma physics and fusion physics has advanced, and the design has been refined and grown more complicated, it is essentially the tokamak concept that we still use today. The vast ITER fusion experiment under construction in the South of France is a tokamak. This is not to say that people don’t still build and experiment with pinch devices, stellarators, or even other types of machine. But the vast majority of work in magnetic confinement fusion over the last fifty years has used tokamaks. Right now, in the ITER project, the future of nuclear fusion may very well depend on the biggest tokamak ever constructed.
But next episode, we’ll focus on something else that really began to spring up in the 1970s, alongside punk music and Monty Python. Because the lasers that the Culham Five used to confirm the temperatures achieved by the tokamak would soon provide an alternative to using magnetic fields altogether. The show will now split into two branches — we’ll continue to explore magnetic confinement fusion, but, in parallel, we’ll look at inertial confinement fusion. Which means frickin’ lasers.
[Can finish episode here]
Nuclear Fusion: Frickin’ Lasers
In the last few episodes, discussing the intricacies of magnetic fields and plasma instabilities — the triumphs and the tragedies of the 1950s and 1960s in magnetic confinement fusion research — it can be easy to forget that humans had already liberated vast quantities of energy from nuclear fusion on Earth.
In fact, the first time it happened was on November 1st, 1952 — the Ivy Mike H-bomb test, which released so many neutrons that two new elements, einsteinium and fermium, were discovered in the fallout above the wreckage of the atoll it destroyed. In its place is a crater two miles across and deep enough to hold a 17-storey building. Only 23% of its energy actually came from the intended fusion of the deuterium fuel — but even that fraction is the equivalent of utterly annihilating 111 grams of matter. Or, in more relatable terms, enough to power the entire country of Mongolia for a year, or equivalent to the output of a standard 500MW power station running continuously for 231 days.
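Those last two figures are easy to sanity-check with E = mc². A two-line Fermi estimate, using the episode’s numbers rather than any official yield data:

```python
# Cross-check: energy from "annihilating" 0.111 kg of matter versus
# a 500 MW power station running continuously for 231 days.
c = 3.0e8                       # speed of light, m/s
e_fusion = 0.111 * c**2         # E = mc^2, in joules
e_plant = 500e6 * 231 * 86400   # watts x seconds

print(f"fusion energy: {e_fusion:.2e} J, plant output: {e_plant:.2e} J")
```

Both come out at about 10¹⁶ joules, so the two comparisons are indeed the same quantity of energy dressed up two ways.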
In other words, even as early as 1952, humans were releasing enough thermonuclear power to compete with conventional power stations. The only problem was that the only feasible way to release it was as a hydrogen bomb.
You’ll remember from our episodes on the hydrogen bomb — Edward Teller’s attempts to develop it, and the eventual success of Ulam’s design — the basic principles of how the bomb works. A primary, fission nuclear bomb explodes with its standard chain-reaction: the X-rays that are then produced as a result of that explosion travel down a tube into a chamber where they’re used to compress a secondary pellet of deuterium-tritium fuel. Crucial to this is focusing the X-rays onto the pellet in a completely symmetric way, which requires a careful design of the device. This then causes that capsule of fuel to rapidly implode, briefly reaching incredibly high temperatures and densities — enough for the nuclei to overcome the electrostatic repulsion between the protons, and finally get close enough for the strong force to pull them together and fusion to take place, releasing energy.
In the episodes on Teller, we even talked about the various schemes — some more hare-brained than others — to use nuclear bombs and hydrogen bombs for civilian engineering schemes, like carving out space for a port in Alaska or cutting a new Suez canal to resolve that rather tricky political crisis.
Of course, you might ask — why not just use the hydrogen bomb as a power source? After all, you’ve already devised a readily available way of liberating thermonuclear energy. The only problem is that it has this rather nasty tendency of destroying everything. In fact, it’s very difficult to create nuclear bombs below a certain yield. The problem is that if you only have a very small amount of fissile material, you won’t be able to cause a self-sustaining chain reaction — the neutrons will escape too quickly without producing that critical chain-reaction. If you use control rods, as in a conventional fission power plant, you can control the neutron flux to get a sustained, constant burn.
But then you won’t produce a sufficient quantity of X-rays to allow you to compress a fuel pellet. Ideally, you would like to produce the powerful X-rays without the intense explosion that accompanies them; then you could use the energy released in the ignition of the fuel capsule and its thermonuclear fusion reactions to drive a turbine, just like in a conventional power-plant but with thermonuclear fuel rather than fossil fuels or enriched uranium.
This minimum feasible yield for a nuclear bomb was still far too explosive to use in this way. But that didn’t stop Teller and his fellow, um, inventive fanatics, from coming up with ideas. Project PACER, for example, dreamed of detonating nuclear bombs in underground cavities filled with water. The nuclear bomb vaporises the water into steam, which then drives turbines.
Of course, the main attraction of this scheme for Teller was that it gave him an excuse to make ever more nuclear weapons. Early tests of Project Pacer demonstrated that it was little more than a very expensive way of making a radioactive-hell pit filled with fissile material. Even if the scheme was successful, it would still have turned out to be more than ten times more expensive than the conventional nuclear fission power plants that were being used at the time. Worse still, in some of the early tests, the cave system partially collapsed and radioactive steam was blown through rock vents far from where they’d intended. Finally, in 1975, the hare-brained scheme was more or less abandoned — although, amazingly, people have continued to look into it since by constructing artificial cavities underground that might be less prone to collapse.
Nevertheless, people were still looking into what was slowly becoming known as “inertial confinement fusion.” You’ll remember our discussions of the Lawson criterion from previous episodes — the idea that you can get a net gain from fusion reactions as long as you’re able to get a high enough value for the product:
the temperature of the plasma * the density of the plasma * the confinement time of the plasma.
The magnetic confinement strategy is to confine plasma for a long time — perhaps, ultimately, many minutes — and thus attain a self-sustaining fusion reaction with lower temperatures and densities. The inertial confinement strategy is more or less: “hang the confinement time, that’s way too difficult: just blast the damn thing so that it’s incredibly hot and dense.” After all, the plasma will remain physically close together for as long as it takes to be blown apart (if that makes sense.) The “inertial confinement time” is the amount of time it takes for the capsule to explode. So, if you’re capable of achieving sufficient densities and temperatures during that time, you might just get enough fusion reactions taking place in your plasma to produce more energy than you put in. This is called ignition.
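Here’s a sketch of how the two strategies hit the same target from opposite ends. The threshold of roughly 3 × 10²¹ keV·s·m⁻³ is a commonly quoted triple-product figure for deuterium-tritium fuel; the densities and confinement times below are purely illustrative, not measurements from any real machine.

```python
# Lawson "triple product" trade-off: density * temperature * confinement time.
# Threshold is the commonly quoted ~3e21 keV*s/m^3 for D-T fuel.
THRESHOLD = 3e21  # keV * s / m^3

def triple_product(density_m3, temp_keV, confinement_s):
    return density_m3 * temp_keV * confinement_s

# Magnetic confinement: a thin plasma, held together for seconds.
mcf = triple_product(1e20, 10, 5.0)
# Inertial confinement: an ultra-dense plasma, "held" only by its own
# inertia for a fraction of a nanosecond.
icf = triple_product(1e31, 10, 1e-10)

for name, ntt in [("MCF", mcf), ("ICF", icf)]:
    print(f"{name}: nT-tau = {ntt:.1e} keV*s/m^3, above threshold: {ntt > THRESHOLD}")
```

Eleven orders of magnitude apart in density and confinement time, yet both routes can, on paper, clear the same bar — which is exactly why the field split in two.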
Early calculations suggested that you could get more energy out than you put in with a relatively small amount of fuel — perhaps just a few milligrams, or thousandths of a gram. It would only, theoretically, take a small amount of energy to ignite this fuel pellet — far less than you’d release by setting off a fission bomb. The fusion reactions would release about as much energy as burning a barrel of oil — easy enough for humans to harness without destroying the power plant every time you ignited the fuel.
The only problem, then, was how to ignite this tiny fuel capsule. As you’re probably expecting already, the early theoretical calculations gave rise to a great deal of optimism. In the 1950s, early computers were simulating the implosion of a fuel capsule made of deuterium and tritium — and they seemed to suggest that delivering 5MJ of energy to the capsule could result in 50MJ being released, a gain of a factor of ten. Later research on different capsule geometries showed that using a thin, cylindrical shell around the fusion fuel would reduce the energy requirements even more.
At the same time, two different branches of inertial confinement fusion were proposed. These are direct drive and indirect drive. In direct drive, you compress the fuel pellet directly, causing it to implode. In indirect drive, you instead supply vast amounts of energy to a container for the fuel pellet. This container is called a hohlraum: it’s like a metal shell. In exactly the same way as the metal shell in a hydrogen bomb ensures that the fusion fuel is bathed in uniform X-rays from all sides, the same is supposed to happen in the hohlraum — when it’s heated to extremely high temperatures, it radiates nice symmetrical X-rays onto the inner pellet, causing it to implode and hopefully release energy. Now the only issue was heating that hohlraum.
Plenty of ideas were suggested. Carl Friedrich von Weizsäcker, along with Hans Bethe, was one of those physicists who had worked out precisely how the Sun’s nuclear fusion worked. You may also remember him as one of the physicists who was taped in post-war recordings discussing the Nazi atomic bomb project, and the reasons for its failure. At a meeting he hosted, in the 1950s, an idea was discussed to ignite a thermonuclear fusion fuel capsule using shock waves from conventional explosives. If the shock waves could be made sufficiently symmetrical, they might be able to compress the capsule enough to result in fusion.
Later, in 1964, the physicist Friedwardt Winterberg suggested a scheme he called “impact fusion” — the capsule could be compressed using very small microparticles, which would be accelerated to 1000km/s and slammed symmetrically into the fuel capsule. A few years after that, he was working with electrically charged ion and electron beams, which could be accelerated very quickly using certain kinds of electrical circuits and could then slam into the fusion fuel capsule. But none of these experiments were capable of igniting the fuel.
It was the invention of the laser in the 1960s that really made inertial confinement fusion feasible; with lasers, you could hope to achieve the kind of spatial and temporal coherence that you needed to drive this implosion. Fiddling around with shock waves or actual, physical particles was just too difficult: not enough power, too much variability and turbulence in the way the energy was delivered to the shell. Lasers could provide a very powerful means of heating the fuel capsule, or even directly providing enough radiation to implode the fuel capsule.
All of this research began in the utmost secrecy for the various governments and scientists undertaking it — after all, we’re talking about the same secrets behind the development of the hydrogen bomb. But in 1972, in the spirit of scientific collaboration, one of the pioneers of inertial confinement fusion in the US — John Nuckolls — was allowed to publish a paper in Nature — carefully composed so as not to give away any military secrets — that outlined the ideas behind inertial confinement fusion.
“Hydrogen may be compressed to more than 10,000 times liquid density by an implosion system energized by a high energy laser. This scheme makes possible efficient thermonuclear burn of small pellets of heavy hydrogen isotopes, and makes feasible fusion power reactors using practical lasers.”
One key point that was identified by the Nature paper was why the laser was such a game-changer; it’s all to do with the pressure that can be achieved. The strength of chemical bonds prevents you from obtaining too much pressure by physically pushing on objects — eventually, the bonds just snap under the pressure. Diamond, for example, can be crushed under a hydraulic press. Above around a million atmospheres, it’s just not feasible to generate pressure by physically forcing objects together. Explosions can produce more pressure — as in the shockwaves that Winterberg was experimenting with — but they’re also much more difficult to control.
In the case of lasers, however, you can take advantage of the fact that photons carry and transfer momentum. Yes — even when you turn the light on, the photons from that lightbulb exert a small pressure on you. In fact, because the momentum carried by a photon is just its energy divided by the speed of light, the pressure due to radiation follows a beautifully simple formula. It’s just the intensity of the light divided by the speed of light. So let’s say you’re 1m away from a 40W bulb. That energy spreads out across a sphere with a surface area of 4pi r², giving around 3.2W/m² as the light intensity when it hits you. Divide by the speed of light, which is 3 x 10⁸ ms^-1, and you get the pressure as around 10^-8 Pascals (or Newtons per m²). That’s about a ten-trillionth of the pressure you’re under all the time due to the atmosphere, so even if you filled the room with lightbulbs, you’re unlikely to notice radiation pressure. However, interestingly, it’s still a few dozen times the nighttime atmospheric pressure on the Moon, which has virtually no atmosphere whatsoever. Now, next time someone switches the lights on, you can go “Gah! I’m feeling an additional pressure of dozens of lunar atmospheres!” Isn’t Fermi estimation fun?
Lasers offer a source of incredibly collimated, incredibly concentrated photons that can deliver an awful lot of energy — and, hence, momentum and pressure — to a very small spot. Remember, *intensity* is what counts for photon pressure, and not the total energy. If you could deliver the 40W power of that lightbulb to an area of a square nanometre, it would push down on that nanometre with a pressure comparable to that at the centre of the earth, over a million atmospheres. This is precisely the kind of thing you can do with lasers: Nuckolls’s paper suggested that even the lasers of the day could blast the outer capsule with a pressure of 100 million atmospheres. The pulse need only last a nanosecond — yet it would be sufficiently intense to create immense pressures. The capsule would then implode inwards — the material of the capsule suddenly accelerated to a few thousandths of the speed of light, or three thousand times the speed of sound: and this would then result in pressures of a trillion atmospheres — comparable to the pressure at the heart of the Sun — and, hopefully, nuclear fusion.
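The whole argument is really a one-liner: pressure equals intensity divided by the speed of light. Here is a sketch of the two extremes — the same 40 W spread over a one-metre sphere versus focused onto a square nanometre. The numbers are illustrative Fermi-estimate values, assuming a perfectly absorbing surface.

```python
import math

c = 3.0e8  # speed of light, m/s

def radiation_pressure(power_W, area_m2):
    """Pressure in pascals on an absorbing surface: P = I / c."""
    return power_W / area_m2 / c

# 40 W bulb, 1 m away: the power is spread over a sphere of radius 1 m.
bulb = radiation_pressure(40, 4 * math.pi * 1.0**2)
# The same 40 W focused down to one square nanometre.
focused = radiation_pressure(40, 1e-18)

print(f"bulb at 1 m: {bulb:.1e} Pa")
print(f"focused to 1 nm^2: {focused:.1e} Pa ({focused / 1.013e5:.1e} atmospheres)")
```

Same power, around nineteen orders of magnitude difference in pressure — that concentration of intensity is the laser’s whole trick.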
Now the race was on to be the first to generate a fusion reaction this way. Both Russian and French scientists were reporting that they’d seen thermonuclear neutrons from pellets hit by lasers, but the imagination of the American public wasn’t really captured until one of their own got involved.
In the early 1970s, one of the people working on this was Kip Siegel. He founded a company, KMS Fusion, to try to get inertial confinement fusion working. This was pretty unusual: after all, most nuclear physics had been done purely under the watchful eyes of the government and the Atomic Energy Commission. Inertial confinement fusion in particular was not something that they wanted to be privately owned: after all, a major motivation for funding ICF is that everything you develop is dual-use. If you can ignite a fusion reaction with symmetrical laser pulses, well, add a little more fuel and a little more power and perhaps you have a nuclear bomb. Not just that — a nuclear bomb “to order”, where you can set the size and scale in advance. Weapons developers were interested in creating small-scale versions of nuclear explosions at any rate — they could provide a good way of testing the effects of the particles and radiation from a nuclear explosion on military hardware. And, of course, developing incredibly powerful lasers for weapons research has always been a fascination among military types — even though, in all honesty, the use cases haven’t really justified it so far.
By 1974, and despite the protestations of the government, KMS was ready to make their big announcement: they, too, had seen neutrons from thermonuclear reactions. The New York Times called it “a significant step towards the goal of nuclear fusion as an almost limitless supply of energy”.
Siegel’s result was really just neutrons with a strong suggestion that they had arisen from nuclear reactions — a far cry from any claims of being able to generate power. But it put Lawrence Livermore Laboratory, which still boasted of associations with Edward Teller, to shame.
It helped that Siegel was a good salesman, with a good story that lined up nicely with the American myth. Here was an entrepreneurial lone wolf who might just be able to beat the Soviet and US governments to the target. He believed in “the lesson of the Cavendish Laboratory (Cambridge, England), where a few bright people outinvented the world for a long period…with wires and chewing gum”, and managed to attract several of the best and brightest US physicists to the startup company. The fact that private industry was starting to take an interest in fusion was a good sign to the public — who, by now, had heard the “fusion in ten years” line for twenty years — that the experts believed fusion reactors could be profitable and just around the corner: a viable and imminent source of energy. The government might be able to invest in blue-sky research, but companies need results — at least, so the thinking went. And although few serious fusion experts would have believed the timescales that Siegel was proposing for commercialising fusion, the mere fact that he was suggesting it was possible put pressure on the magnetic confinement programmes and the government to make similar promises.
Of course, it came at a great time to be involved in energy research in the US. The early 1970s saw the oil crisis, where OPEC members cut off oil supplies to the US, and the gasoline price skyrocketed. This motivated a huge increase in investment for alternative sources of energy — from a geopolitical point of view, the US realised that it couldn’t depend on OPEC. This was the decade that the Department of Energy was created. Solar panels, for the first time, were developed for use as power sources on Earth rather than just niche space-based applications — in fact, if you look at any graph of solar cell efficiency development, it begins in the 1970s. Wind turbines were further developed and deployed. This took place across the US and Europe — every nation affected by OPEC’s decision — and, in many ways, the 1970s was the birth of renewable energy (aside from hydroelectric power) as a serious force that people could dream would one day take over.
(Incidentally, it drives me up the wall that one of the greatest times for funding for renewable energy projects owed to OPEC turning off the tap. The fact that fossil fuels are finite and will run out, and that they destroy the environment and cause climate change, should be enough to motivate this level of effort and interest *the entire time*. The fact that depending on oil-rich economies is sometimes geopolitically bad should be way, way down on the list of motivating factors for developing new, renewable and clean sources of energy. But I suppose everyone who wants humanity to explore the stars has to deal with the same idea — that a great deal of spaceflight development arose out of Cold War competition and not due to the grand vision of what the species can achieve. We might be smart but we’re not wise.)
It was in this milieu of trying to reduce dependence on foreign oil that a great deal of additional investment into nuclear energy — including nuclear fusion — took place. The magnetic confinement fusion budgets skyrocketed from $30m a year to $300m a year in the space of just seven years. The budget for inertial confinement fusion skyrocketed from virtually nil to $200m a year by the end of the 1970s.
Sadly for Kip Siegel, he wouldn’t live to see the huge inertial confinement fusion projects that he had helped to trigger. In the midst of testifying about the promise of laser fusion to Congress, he suffered a stroke, and died shortly afterwards. His company, KMS, survived all the way up until 1990 — and if you’re interested in its history, you can find a testimonial from someone who worked there on the Kip Siegel page at Fusion For Freedom. Little did they know that their idea — to drive inertial confinement fusion with infrared lasers — would ultimately prove unworkable. The catchphrase in the company, “Online by ‘79”, proved to be just another over-optimistic fusion estimate.
Yet he was a pioneer in several ways — both in demonstrating that inertial confinement fusion could produce thermonuclear reactions with lasers, and in paving the way for future entrepreneurs to try their hand at fusion. While it’s true that companies with big R+D arms, like General Electric and General Atomic, had invested in fusion programmes over the years and decades — none of them had done so publicly, as a private startup enterprise, with the official stated aim of commercialising fusion power. We will have plenty more to say about the private companies that have attempted — and are currently attempting — to circumvent the international collaborative efforts and develop commercial fusion before anyone else manages it.
Next time, though, we’ll talk about the early big government attempts at laser and inertial confinement fusion — Project Janus, and Project Shiva.
======= Then, back to MCF, then NIF, then ITER, then Where Are We Now, then private companies, criticism of whole concept, etc ============
Nuclear Fusion: Inertial Confinement Fusion, and Two-Faced Destroyers
Last episode, we described how the invention of the laser gave scientists a new opportunity to pursue a totally different avenue to nuclear fusion.
To get nuclear fusion to release the energy that’s locked up in those nuclei, you need to heat plasma to extraordinarily high temperatures and create a very high density of nuclei. The temperature ensures that the nuclei have enough energy to overcome their mutual electrostatic repulsion when they collide, and the density ensures that collisions happen frequently enough to liberate more energy than you’re supplying.
In magnetic confinement fusion, the response of plasma to electromagnetic fields is used to attempt to contain the “burning” plasma for as long as possible. In a tokamak, the magnetic fields and the current that’s driven through the plasma also help to compress and heat the plasma, to hopefully obtain near-fusion conditions.
Inertial confinement fusion is more like detonating a tiny hydrogen bomb. Attempts to skilfully and dextrously hold the plasma as it burns are abandoned in favour of briefly attaining extremely high energies and densities in a controlled implosion. This idea, using a tiny pellet of fusion fuel that’s rapidly compressed to release energy, is the mechanism behind the hydrogen bomb — which uses x-rays from a fission bomb primary to compress that capsule. Prior to the 1960s, scientists looked for — and struggled to find — a way to compress the secondary fusion capsule enough to release net energy, but without literally detonating an atomic bomb, which tends to ruin your power plant and the surrounding neighbourhood.
When the laser was invented in the 1960s, scientists finally had that mechanism — something that could deliver a sizeable quantity of energy to a very tiny place: the fusion fuel capsule. Lasers could produce pressures, due to photons bouncing off the capsule, of billions of atmospheres. Furthermore, lasers could produce very collimated, spatially focused beams. To cause fusion to happen, you need almost perfect spherical symmetry in your compression — that’s why the Teller-Ulam bomb device needed a special design to reflect the X-rays onto the fusion pellet simultaneously. It was also partly why ideas like firing pellets into the capsule, or using explosive shock-waves, didn’t work — getting them to hit the capsule simultaneously from all sides was very difficult. With lasers, where the timing and spatial profile of the beam can be controlled exquisitely with mirrors, this became a real possibility.
The first people to generate thermonuclear neutrons this way, as we covered last week, were at the private company KMS Fusion. We know now that their use of infrared lasers was never on track to produce net energy gains, but it sparked a great deal of interest in, and funding for, the field. And, naturally, because inertial confinement fusion involved creating incredibly powerful lasers, and studying how those lasers could be used to trigger a small-scale thermonuclear explosion… the centre of research for this, um, dual-use technology became the United States of America. In fairness, they were part of the tokamak revolution too, but ultimately the really big explodey giant-laser projects that got an awful lot of funding for… potential… military applications? … were in the USA.
The first attempt, Janus, was more or less a catastrophic failure. The neodymium glass laser they were using was simply too powerful for the optical equipment — the beams would quickly heat, melt, and distort the very array of lenses and mirrors needed to focus them onto the hohlraum fuel capsule. Anyone who’s ever been in a darkroom fiddling around with laserbeams for ten hours for their undergraduate practicals — or, god forbid, for their graduate study — will sympathise: a tiny defect or imperfection in an optical system can have a very big impact on the result you’re trying to achieve. In the case of the Janus laser, these little defects, burns, or misshapen parts of optical kit resulted in hot and cold spots in the final beam. This would of course prevent the fuel capsule from being symmetrically illuminated — but that wasn’t the worst of the problems. What started as a slightly hotter spot of the laser — a zone where more photons were impacting — could quickly become exaggerated as the laserbeam passed through more and more mirrors, lenses, and other optical components. These hot-spots could totally destroy optical equipment; nearly every time the Janus laser was turned on, it would destroy some valuable component somewhere along the line, rendering the whole apparatus useless. These nonlinear optical effects, as they’re called, put paid to Janus. The effects became so severe after just the first few amplification stages of early lasers that it was seen as essentially impossible to exceed the gigawatt level for ICF lasers without destroying the laser itself after just a few shots.
Even though the machine’s laser pulses only released about ten joules of energy a pop — around the energy you’d use to lift a bag of sugar a metre into the air (where it would presumably then be useful as an offensive weapon against unknown assailants) — when concentrated on small enough spots, and in tiny amounts of time, it was more than sufficient to wreck the equipment. There’s a picture of Janus as it looked in 1975 that I’ll put up on Twitter @physicspod with this episode; an unwieldy beast.
The next-generation machine was called Argus. The plan here was to make the apparatus much, much larger. Janus could fit inside a fairly large room — in fact, it’s not too dissimilar from many lab setups that I’ve seen. Argus was more like the size of a sports hall. Why so large? The beamlines for the lasers involved had to be very long. The purpose of all of these lenses, mirrors, and amplifier stages is to amplify the beam and shape it to your needs. The laser itself produces a great many coherent photons, but to focus them down to the size of the capsule and attain the concentrated power that’s required for inertial confinement fusion to be feasible — this requires many amplification stages.
By making the beamline longer, it was possible to add an extra “spatial filtering” stage after each amplification stage. These filtering stages were pretty simple — they essentially just involved passing the laserbeam through an aperture the size of a pinhole after each amplification. That way, the hot and cold spots — the bits of the laser that were misfiring or heading off in the wrong directions — were carefully filtered out. The same trick is used in all kinds of optical imaging to remove features you don’t want — in this case, any imperfections in a carefully collimated beam. The technique had its downsides: because the filters threw away a good deal of the power supplied to the laser, you needed a stronger laser to deliver the same energy to the fuel capsule. But since the alternative was totally destroying your laser kit and apparatus, this was certainly preferable.
Whenever you talk to someone who works on these big lasers, you’ll understand that they’re most keen to talk about the power of their laser in terms of — well — power. Power is just the rate of energy release — energy divided by time, if you like. So you can get UNLIMITED POWERRRRRRRRRRR… well, very high power… either by releasing a huge amount of energy over a reasonable amount of time, or a reasonable amount of energy over a tiny amount of time.
So it’s technically true that this Argus laser operated at a stunning 4TW of power way back in 1976. 4 terawatts, for context, is comparable to the entire world’s capacity to generate electricity here in 2018, which is around 6TW.
So yeah, saying that your laser operates with power equivalent to every power station in the world operating at maximum capacity sounds very impressive, and gives you some mad scientist street cred. But admitting that it only operates at this power for a matter of picoseconds — trillionths of a second — and, hence, is only releasing a few joules, or the amount of energy released when you drop a tomato on the ground (or, if you prefer, 1/60th of the energy you’re constantly radiating away each second just by sitting still)… suddenly it sounds a lot less like a giant death ray and a lot less impressive. So, if a laser scientist ever quotes you the power of their laser, do ask them how long they can keep it up for.
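For anyone who wants to check that arithmetic at home, here’s a quick sanity check in Python — the one-picosecond pulse length is an illustrative round number of mine, not a quoted Argus spec:

```python
# Peak power sustained for only a picosecond releases just a few joules.
power_watts = 4e12        # 4 TW -- the Argus peak power quoted above
pulse_seconds = 1e-12     # ~1 picosecond (illustrative round number)

energy_joules = power_watts * pulse_seconds
print(energy_joules)      # ~4 J -- tomato-drop territory, not a death ray
```

A few joules, just as the tomato comparison suggests: all the drama is in the denominator.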
Laser braggadocio aside, the Argus laser was capable of producing bursts of fusion neutrons — like the ZETA experiment did for pinches before it. Argus was designed primarily to characterise large laser beamlines and laser-target interactions; there was no attempt to actually achieve ignition in the device, as this was understood to be impossible at the energies Argus could deliver. Perhaps learning from the mistakes of experiments like ZETA — where, unable to properly diagnose the plasma, scientists mistakenly thought they had achieved fusion when they had not — the main purpose of Argus was to develop techniques to measure the implosion of the fusion capsule and the interaction of the laser and the target. Even as a test device, it was performing well. But anyone who’s been paying attention to the history of fusion over our saga so far will anticipate what happened next.
The real problem with Argus — and the new biggest headache for inertial confinement fusion — was the dang electrons. The plan is to irradiate either the fusion target, or a hohlraum that surrounds it, with intense light, with the ultimate aim of heating and compressing the nuclei in the plasma. The problem is that when you shine those lasers onto matter, they heat up the electrons first. The light, charged electrons interact far more strongly with the laser light than the slow, heavy, neutron-laden nuclei. These hot electrons buzz about and reduce the production of the hard X-rays that compress the fusion fuel, drawing energy away from the nuclei as they collide with each other and with photons. Worse still, they would generally get so hot that the target exploded before the nuclei had sufficient time to collide with each other and warm up. Hot electrons and cold nuclei is no way to make fusion happen: it’s just an incredibly efficient way to blow up a tiny capsule. And, if the electrons are preferentially heated relative to the nuclei, you will blow up that tiny capsule far faster than you’ll ignite the fusion fuel.
As with all fusion experiments, we shouldn’t be too narrow-minded about whether they “succeeded” or “failed.” If fusion ever becomes commercially viable, all of these roadblocks that were hit along the way by various experiments will be seen as valuable discoveries on the way to finding something that worked. So Argus succeeded in eliminating the problems of its predecessor through spatial filtering, and it allowed targets to be heated to unprecedented temperatures. It was never really designed to ignite a fusion reaction producing more energy than it consumed, but it did help develop the X-ray diagnostic cameras that could view the hot plasma being produced in these targets.
Future experiments would attempt to mitigate the hot electron problem by changing the wavelength of the light. If you increase the frequency — and hence the energy — of each photon, the wavelength of the laser light decreases, and you can hope to heat the ions slightly more preferentially than the electrons. They were able to do this by passing the laser through certain crystals that could double or triple the frequency of the beam. These crystals are called non-linear optical media: their response to light isn’t simply proportional to the strength of the electric field, so the light they re-emit contains harmonics — components at double or triple the original frequency. It’s almost analogous to water waves being forced into new modes and interfering with each other — the result is a wave that’s a sum of the original frequency plus doubled or tripled frequency light. Again, for those trying to design an inertial confinement fusion device, these huge nonlinear crystals added to the expense, the energy losses, the power of the laser needed, and the complexity of the beamline.
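To make concrete what those crystals buy you, here’s a tiny sketch — the 1053 nm figure is the typical neodymium glass laser wavelength, an assumption on my part rather than a number from this episode:

```python
# Doubling or tripling the frequency shortens the wavelength proportionally.
fundamental_nm = 1053             # Nd:glass lasers emit infrared near 1053 nm

doubled_nm = fundamental_nm / 2   # green light
tripled_nm = fundamental_nm / 3   # ultraviolet
print(doubled_nm, tripled_nm)     # 526.5 351.0
```

Shifting from infrared at ~1053 nm down to ultraviolet at ~351 nm is exactly the kind of jump that helps the light couple to the target rather than to a cloud of hot electrons.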
Nevertheless, the results were promising enough that scientists at Livermore began constructing an even larger laser. For a cool $25m in 1970s dollars, they produced a device with twenty separate beamlines — twenty separate arms that would irradiate the target from all sides through 30m-long arrays of optical equipment, fed by a powerful laser. Nowadays, inertial confinement fusion scientists will tell you that the device — which they called Shiva, after the many-armed Hindu god — was also just a prototype; at the time, though, it wasn’t known what conditions would be required to achieve ignition with inertial confinement fusion. Shiva would prove that any calculations suggesting this design could produce more energy than it required were wrong — by a factor of ten thousand.
The problem was — you guessed it — a new kind of plasma instability that was discovered. Charles Seife has by far the best explanation of this in his book, The Sun In A Bottle, which you should all read: I’m going to borrow part of that explanation here.
Imagine filling a glass to the brim with water, and then tipping it upside-down. The water falls out, right? You don’t need a physics degree — or even an understanding of Newton’s laws of gravity — to explain or predict that.
Except that it is a little more complicated than that. Atmospheric pressure arises due to particles from the atmosphere smashing into us from all directions. It’s around 101,000 pascals, which means every square metre of surface area has a force of 101,000 N pushing on it due to that atmospheric pressure — equivalent to the weight of around 10,000 kg. Luckily, our internal and external pressures are balanced, so we don’t feel the immense weight of the atmosphere most of the time (except, maybe, on Monday mornings.)
But this atmospheric pressure pushes in all directions. It’s more than enough to keep the water in the glass when you turn the glass upside-down — and, under the right circumstances, it does. If you fill a glass to the rim with water, hold a smooth piece of cardboard over that rim, and then tip the glass over — try it outside or over a sink — if you then carefully let go of the cardboard, you’ll see that the water stays in the glass. The cardboard clearly isn’t holding it back; instead, the water is supported by air pressure. According to Seife’s calculations, you’d need a glass of water thirty feet tall for its weight to overcome the upward-pushing atmospheric pressure.
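Seife’s thirty-foot figure is easy to verify for yourself — balance the weight of a water column against atmospheric pressure:

```python
# A column of water of height h exerts pressure density * g * h at its base,
# so the tallest column the atmosphere can hold up is h = pressure / (density * g).
pressure_pa = 101_000      # atmospheric pressure, ~101 kPa
density = 1000             # density of water, kg/m^3
g = 9.81                   # gravitational acceleration, m/s^2

height_m = pressure_pa / (density * g)
height_ft = height_m / 0.3048
print(round(height_m, 1), round(height_ft, 1))   # 10.3 33.8 -- about thirty feet
```

About ten metres, or a little over thirty feet — so your pint glass doesn’t stand a chance of overpowering the atmosphere by weight alone.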
Which then raises a very serious question: if atmospheric pressure pushing up on the water is enough to support a huge column of it, why does the water usually fall out when you turn the glass upside-down?
The answer is the Rayleigh-Taylor instability, very familiar to those who have listened to our past tales of the instabilities that afflict plasmas. A boundary where a less dense fluid — air — pushes on a denser fluid — water — is inherently unstable. If there are any slight imperfections or deviations along that surface, those bumps will get bigger and bigger. Regardless of how careful you are, there will always be some imperfections that can grow rapidly. Soon enough, big tendrils of water will start forming, breaking off and falling down, and eventually the entire glass of water rains down in the familiar spattering, sputtering manner.
The cardboard is a solid; it’s held together by stronger bonds than the fluid, and imperfections in its surface aren’t free to flow or grow. So air in contact with cardboard, or water in contact with cardboard, is a stable situation — and, if you’re careful, no deviations form at all and there’s no Rayleigh-Taylor instability.
Trying to get laser confinement fusion to work is like trying to keep the water in the inverted glass — only without the cardboard. That’s because you’re attempting to compress the deuterium-tritium capsule with something that’s less dense. Inevitably, even before you reach fusion conditions, the deuterium-tritium fuel will be far denser than whatever you’re trying to use to compress it — photons, or the hot atoms from the hohlraum capsule that’s collapsing to compress the fuel pellet. Since you’re trying to compress a dense substance with one that’s less dense, you get the exact same instabilities as you find in a glass of water when it’s turned upside-down.
If there’s the slightest imperfection, dent, divot, hole or whatever on the surface of that compressing fuel capsule — or if it forms as the capsule compresses — the perfectly round sphere of fuel that you build quickly becomes spiky, with long tendrils and fingers that extend outwards. This isn’t what you want; you want a symmetrical collapse that nicely compresses and contains the plasma fuel at high densities and temperatures. Instead, what you get is Rayleigh-Taylor tendrils that allow the plasma to escape and cool before it reaches fusion conditions.
To have any hope of achieving that symmetrical collapse, you need a uniformly heated target — no hot and cold spots, no areas of underdensity and overdensity that will form these tendrils and cause the fuel capsule to break apart before it can be compressed to fusion densities. But even though the Shiva device illuminated the tiny sphere of fuel from twenty different directions, the compression still wasn’t uniform enough to avoid hotspots where the lasers hit the pellet. This was why, after Shiva, much of the effort focused on indirect drive — heating a capsule (more or less) uniformly, which then collapses down onto the fuel pellet. But even this technique wasn’t enough to overcome the Rayleigh-Taylor instabilities entirely. And, as in the case of magnetic confinement fusion, every new instability made the calculations worse for the fusion scientists and engineers. Devices performed many times worse in practice than they did in theory.
Nevertheless, despite growing concerns around this Rayleigh-Taylor instability, the mood in the inertial confinement fusion community was optimistic — or, at least, this was what they projected to the public. Here’s a press-release from shortly after the Shiva laser began reporting experimental results, in 1979.
“In recent months, the 20‐armed Shiva laser system at the Lawrence Livermore Laboratory has attained a significant milestone on the road to the development of a laser‐fusion reactor. The Livermore group has reported that with target pellets of classified design Shiva has driven the deuterium–tritium fuel inside the pellets to between 50 and 100 times its liquid density. (With unclassified ablative targets they report achieving 10 to 20 times liquid density.) One hundred times liquid density is only an order of magnitude short of the densities that will be needed to achieve “scientific break‐even.” This goal, namely the release of as much fusion energy as the lasers deliver to the target (or the somewhat more modest goal of thermonuclear ignition), may well be achieved by Nova, the next generation laser system at Livermore, on which construction began in May.”
Yes, that’s right: it will come as no surprise that the solution was to build another, larger machine that would hopefully overcome the problems associated with the last generation of devices.
All of this is to say that, by the time the $200m laser confinement fusion research facility — called Nova — was being constructed at the start of the 1980s, it was no longer the case that inertial confinement fusion was going to be this new, bright idea that could circumvent all of the complexities of containing plasma in a tokamak with a quick, cheap device acting like a tiny atomic bomb. Nor was it the case that the new and exciting invention of lasers made the technology easy to develop. Instead of a young, upstart field that took over from magnetic confinement fusion, inertial confinement fusion followed a very similar trajectory: initial optimism, followed by failures and instabilities, followed by renewed efforts with ever-larger and ever more complex machines. Individual scientists would, of course, have their own reasons for preferring inertial or magnetic confinement fusion. ICF was always more popular in the United States than anywhere else, in part because of the large amounts of military funding it could secure, thanks to the dual-use nature and weapons potential of these vast lasers and controlled explosions — tokamaks being far more difficult to weaponise in the same way. From here on out, the two trajectories of fusion would evolve in parallel: two efforts with two quite different central ideas for achieving fusion, but — ultimately, so far — with similar results.
Nuclear Fusion: Simple Engineering Problems?
Welcome back to the latest episode in our nuclear fusion megaseries.
Are you all still with me? You know, I published an article recently over on Singularity Hub, where I write about science and technology, about a new spin-out startup out of MIT that was hoping to make nuclear fusion a reality in the next 15–20 years. Naturally, the comments I got were mostly “We’ve heard that one before.” If nothing else, this series demonstrates that — yes, I, too, have heard that one before.
The title for this episode is inspired by a quote from Dr Michio Kaku, who writes wonderful science communication books and built a particle accelerator in his garage as a teenager. He said “What we usually considered as impossible are simply engineering problems. There’s no law of physics preventing them.” I think this rather summarises the optimism that goes into fusion research and design; but in this episode, we’ll get into why “simple engineering problems” is a phrase that should turn your head.
There was another notable shift in the 1970s that perhaps speaks to the optimism in the magnetic confinement community. After the excitement of the very earliest experiments had died down, people working on fusion would always phrase their latest device as a step along the road to achieving fusion someday. They weren’t aiming to study the practicalities of harnessing fusion energy; they certainly weren’t trying to build a working power plant. Instead, each new machine was an attempt to learn something more about fundamental plasma physics. They wanted a proof of concept — to prove that magnetic fields really could confine plasma for long enough to produce net power. After all — why bother building a reactor when you’re not even sure that you have a power source?
In the early 1970s, more attention began to be paid to reactor studies — trying to work out how to harness the energy produced by nuclear fusion, all in one system. At the start of the decade, a conference was held at Culham in Oxford to discuss reactors.
In essence, it seems like a fairly simple proposition. In a nuclear fission reactor, what you have is effectively hot neutrons and radiation produced when the uranium atoms split and undergo a chain reaction. You pump coolant through the reactor, which heats up, generates steam, and that steam then spins a turbine. So far, so good.
Nuclear fusion should be a fairly similar proposition. Once you’ve got the plasma burning, it will emit energy as the nuclei fuse together — mostly in the form of extremely hot neutrons. You’d hope that a new heat source like this could more or less seamlessly replace heat from burning coal, or oil, or nuclear fission reactions. But it’s not that simple.
For a start, power plants are not particularly efficient. In your average fossil fuel power plant, less than a third of the heat energy released when the coal or oil or natural gas is burned ends up as useful electricity. These are plants that have been optimised to within an inch of their lives, designed and improved for decades, but you can’t beat the laws of thermodynamics and it’s very difficult to achieve extremely high levels of efficiency.
In some ways, though, it’s not a huge problem for fossil fuel and nuclear fission plants. If you need more electricity to be produced, you can simply build more power plants, or burn more fuel.
But fusion reactors obviously require a certain amount of energy to get going in the first place. What if it works out that, with your current design, the maximum amount of energy you can produce is three times what you put in — but then, in that “simple, tried and tested” conversion stage from heat to electricity, you lose two thirds of the energy? Then, since you’re operating the magnets and heating the plasma with electricity, you’re not really gaining anything at all: the electricity you generate goes on a majestic loop straight back into producing the heat that generated it. Which is a fun novelty, I suppose, but hardly “limitless, clean energy, too cheap to meter.”
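The arithmetic of that majestic loop is worth spelling out — the gain of three and the one-third conversion efficiency are the figures from the hypothetical above, not any real reactor’s numbers:

```python
# A fusion gain of 3, fed through a one-third-efficient steam cycle,
# returns exactly the electricity you spent -- with nothing left to sell.
heating_in = 100.0                       # electricity spent heating/confining (arbitrary units)
fusion_heat_out = 3.0 * heating_in       # gain of 3: 300 units of fusion heat
electricity_out = fusion_heat_out / 3.0  # two-thirds lost converting heat to electricity

net_electricity = electricity_out - heating_in
print(net_electricity)                   # 0.0 -- the majestic loop
```

This is why fusion people distinguish between scientific break-even and the much tougher engineering break-even: the conversion losses mean the plasma has to gain considerably more than it loses before the plant as a whole does.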
There is, of course, another problem. When I casually say “hot particles”, I mean neutrons flying off the deuterium-tritium fusion with an energy of 14.1 MeV. That’s 14.1 mega-electron volts — equivalent to the energy released if around fourteen electron-positron pairs annihilated each other. In terms of a temperature equivalent, those neutrons are at around 160 billion kelvin. They’re moving at around a hundred million miles an hour. And, what’s more, because they’re electrically neutral, you can’t use electric or magnetic fields to easily slow them down. They like to crash into and upset the nuclei of atoms. Anything they crash into quickly becomes radioactive — which means that fusion plants do, in fact, produce radioactive waste. Instead of waste fuel, however, it’s heavily irradiated bits of the plant itself that will likely have to be periodically disposed of before they break entirely. But if you want to harness their energy, at some point, these scorching-hot neutrons — hotter than the heart of the Sun — will have to crash into something and give that energy up. Even today, a huge area of fusion research is simply trying to find materials that can withstand bombardment from these incredibly hot neutrons for any reasonable length of time.
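For the curious, those temperature and speed figures fall straight out of the neutron’s energy — here’s a rough check, using the classical kinetic-energy formula, which is good to a couple of per cent at these speeds:

```python
import math

# Express the 14.1 MeV D-T fusion neutron as a temperature and a speed.
E_J = 14.1e6 * 1.602e-19    # 14.1 MeV converted to joules
k_B = 1.381e-23             # Boltzmann constant, J/K
m_n = 1.675e-27             # neutron mass, kg

T_kelvin = E_J / k_B                  # ~1.6e11 K: ~160 billion kelvin
v_ms = math.sqrt(2 * E_J / m_n)       # classical estimate; works out near 0.17c
v_mph = v_ms * 2.237                  # ~1.2e8: "a hundred million miles an hour"
print(f"{T_kelvin:.1e} K, {v_mph:.1e} mph")
```

Both headline numbers from the episode check out: roughly 160 billion kelvin, and a shade over a hundred million miles an hour.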
So suffice it to say that, as well as it being an immense physics challenge to confine plasma with magnetic fields for a long enough time to generate energy, there are also immense fusion engineering and design challenges to overcome, if you want a reactor that works, that’s stable, and that will produce a decent amount of energy without constantly breaking down or requiring more energy than it generates.
As Joan Lisa Bromberg points out, a purely experimental apparatus and a machine designed to generate a steady stream of power demand quite different kinds of engineering. A key aspect of experimental design is minimising the time between coming up with the idea for your experiment and being able to perform it — after all, there are competing groups, and if your kit takes ten years to build, people might have answered your questions by some other means in the interim. So the experimental tokamaks and laser fusion setups were built with simplicity in mind: you go in, run a few tests, get the results as quickly as possible. If the machine fails after a few years, or only gives reliable results every third run, it’s no great loss. The same can’t be said of a power plant: for it to be commercially viable, let alone competitive with other means of producing electricity, it needs to work for many years.
And fusion engineers would also start making demands of the physicists. If some brilliant plasma theoretician comes along and says you can obtain optimal fusion conditions by heating the plasma to a trillion degrees Celsius, and that happens to be hot enough that the braking radiation — bremsstrahlung — from the accelerating charged particles will melt your power plant, it is obviously not an optimal solution.
This isn’t just some idle complication, either: it’s not like you can wave your hands and say “Oh, by the time the fusion scientists have figured out how to get plasma to behave and release energy through fusion, the engineers will have figured out how to harness its energy — that’s the easy part.” Because it can fundamentally alter how feasible fusion is as a source of energy. We’re used to hearing this mantra that fusion is a source of “clean, safe, limitless energy”…
Seriously: just out of interest, I googled “clean, safe, limitless power”: pages upon pages of articles about nuclear fusion, from private companies like Tokamak Energy and universities like UC Berkeley. “Star Power on Earth, A Limitless Clean Energy future”. “We are closer than ever to unlimited clean energy.” “Clean, limitless fusion energy is just 15 years away, say MIT.” (I even wrote an article myself for Singularity Hub about that latest spinout company from MIT, which acts as a very brief history of nuclear fusion.)
These claims sound wonderful, and they’re of course designed to ensure that people will continue to pour millions into your fusion research programme. And, in some ways, you can justify them. Fusion energy is “limitless”, in a sense. Fossil fuels are running out, and will be gone over a timescale of decades to centuries. The uranium that we mine for our fission reactors is also finite, forged billions of years ago in cataclysms like neutron star mergers before finding its way into the Earth’s crust.
Trying to figure out how much uranium we have left is very similar to trying to figure out how much coal or oil is left, and you’ll remember that in the TEOTWAWKI episode of Peak Oil, we pontificated about how difficult that is. Yes, it’s true that extraction technologies get better all the time, but it’s also true — regardless of how crazy BP economists might attempt to spin it — that this stuff is finite.
There are few arguments that annoy me more than this “Oh, oil will never run out because of supply and demand, we’ll always be able to sell the last drop of oil for some price, and new technologies will let us make more for any price…” The chief economist of BP argued this, saying that even the desk he was writing at could be turned into oil if someone was willing to pay the price. I fail to see how that’s a valid argument — it’s like me serving cake at a party with too many guests and offering the reassurance: “I’ll just serve everyone a crumb and charge fifty quid each, and when we run out of crumbs I guess people can eat the table-cloth.” Okay, Thomas, lie down somewhere…
Based on your assumptions about how much better those extraction techniques will get, about how much energy we’ll use in the future, and about how much of a specific fuel we’ll use, you can come to all kinds of different numbers for how long the uranium will last — just as you can for oil. Or, okay, Mr Economist man: for when these resources become unaffordable or impractical, rather than “running out”…
All I’ll say is that, at current rates of use, the proven recoverable uranium reserves will last for around 135 years — similar to some estimates for coal. On a timescale of a few centuries, fossil fuels will likely become impractical.
The claims that nuclear fusion is “limitless” by comparison rest on the idea that you’ll use deuterium as the main source of the fuel. Deuterium is just heavy hydrogen, and it turns out that around 1 in 5,000 seawater molecules is heavy water. So you can make all kinds of nice statements along the lines of “One litre of seawater contains more fuel than five hundred litres of petrol” or “the fuel for nuclear fusion is simply seawater” — yes, once you’ve actually extracted the deuterium. The idea that the fusion fuel is simply seawater is also behind some of the more absurd claims you’ll see that fusion energy is somehow also “free”. Fairly reputable websites will still publish this amazing claim, with TechRadar billing fusion as “unlimited free energy.” It’s free in the same way that a house is free to live in once you’ve bought it and paid your electricity bills for the month — or, perhaps more pertinently, in the same way that energy from a wind turbine or solar panel is free once you’ve built them and paid for their maintenance costs. Funnily enough, you still get charged for electricity from solar panels. The people currently building the multi-billion dollar ITER reactor will assure you that any energy it does eventually produce was certainly not free.
Nevertheless, assuming you can extract large amounts of deuterium from seawater, the theoretical maximum looks more like billions of years of current energy use, as opposed to the centuries provided by fossil fuels. That’s the same kind of timescale the sun is going to exist for before expanding to roast the Earth, so, to all intents and purposes, it’s as much as you could ever want.
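That “billions of years” claim can be sanity-checked with some very round numbers — every figure below is my own back-of-the-envelope assumption, not a number from the episode:

```python
# Roughly how long could ocean deuterium power the world via D-D fusion?
ocean_mass_kg = 1.4e21           # total mass of the world's oceans (rough)
hydrogen_fraction = 2 / 18       # hydrogen's share of water's mass (2 of 18 units in H2O)
deuterium_fraction = 1 / 3200    # deuterium's share of hydrogen mass (rough)

deuterium_kg = ocean_mass_kg * hydrogen_fraction * deuterium_fraction

energy_per_kg = 3.4e14           # J per kg of deuterium fully burned, D-D chain (rough)
world_energy_per_year = 6e20     # rough global primary energy use, J/yr

years = deuterium_kg * energy_per_kg / world_energy_per_year
print(f"{years:.0e}")            # on the order of tens of billions of years
```

Even with generous error bars on every input, you land in the tens of billions of years — comfortably longer than the Sun will last, which is the sense in which “limitless” is defensible.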
But this assumes that you get the deuterium-deuterium cycle of fusion working. At present, the big fusion projects like ITER are going with deuterium-tritium fusion, which — as we explained in previous episodes — is easier to achieve. Tritium has a half-life of around twelve years, which means that it doesn’t really exist in the Earth’s crust or seawater in the same way as deuterium does. The way it’s currently produced for these fusion experiments is by bombarding lithium with neutrons. Luckily, as mentioned, the fusion reactor produces a great number of neutrons. You can hope to use lithium as a coolant for the reactor, swilling it around the fusion reactor chamber in liquid form. It gets bombarded with energetic neutrons, and splits into tritium and a helium isotope. This is good, because it means you don’t need to expend too much additional energy to make the tritium — it’s a by-product of a clever reactor design. But, of course, then the inputs to your system are deuterium from seawater, and lithium from mining. Lithium is also a finite resource — and ever-more finite as lithium-ion batteries grow more popular. There might be 120–150 years of lithium in the ground based on current consumption patterns.
Now, it’s true that there’s a great deal of lithium in seawater, and that extracting it is a holy grail of resource extraction science. Meanwhile, right now, China has more or less cornered the lithium market, buying up lots of shares in mines in South America. Western powers are looking to extract their own deposits, but large-scale lithium mining is pretty expensive and environmentally destructive in itself.
What’s more, the lithium breeder fusion reactor has its own issues — it’s not exactly self-sustaining. Daniel Jassby, who worked on nuclear fusion for more than 25 years, wrote an excellent article in the Bulletin of the Atomic Scientists highlighting some of the technical shortfalls with these ideas. He points out:
“The tritium consumed in fusion can theoretically be fully regenerated in order to sustain the nuclear reactions. To accomplish this goal, a lithium-containing “blanket” must be placed around the plasma.
But there is a major difficulty: The lithium blanket can only partly surround the reactor, because of the gaps required for vacuum pumping, beam and fuel injection in magnetic confinement fusion reactors, and for driver beams and removal of target debris in inertial confinement reactors. Nevertheless, the most comprehensive analyses indicate that there can be up to a 15 percent surplus in regenerating tritium. But in practice, any surplus will be needed to accommodate the incomplete extraction and processing of the tritium bred in the blanket.
Replacing the burned-up tritium in a fusion reactor, however, addresses only a minor part of the all-important issue of replenishing the tritium fuel supply. Less than 10 percent of the injected fuel will actually be burned in a magnetic confinement fusion device before it escapes the reacting region. The vast majority of injected tritium must therefore be scavenged from the surfaces and interiors of the reactor’s myriad sub-systems and re-injected 10 to 20 times before it is completely burned. If only 1 percent of the unburned tritium is not recovered and re-injected, even the largest surplus in the lithium-blanket regeneration process cannot make up for the lost tritium. By way of comparison, in the two magnetic confinement fusion facilities where tritium has been used (Princeton’s Tokamak Fusion Test Reactor, and the Joint European Torus), approximately 10 percent of the injected tritium was never recovered.”
So, in practice, Jassby is pointing out that — even if your supply of lithium for the blanket is limitless — breeding can at best only slightly exceed the tritium being burned, leaving almost no margin for losses. You need a really efficient system to recover the unburned tritium, injecting it into the reactor over and over again. Jassby doesn’t think this is really practical, so instead:
“To make up for the inevitable shortfalls in recovering unburned tritium for use as fuel in a fusion reactor, fission reactors must continue to be used to produce sufficient supplies of tritium — a situation which implies a perpetual dependence on fission reactors, with all their safety and nuclear proliferation problems. Because external tritium production is enormously expensive, it is likely instead that only fusion reactors fueled solely with deuterium can ever be practical from the viewpoint of fuel supply.”
Jassby’s might be a pessimistic view — one that treats deuterium-only fusion as the only really scalable solution, short of keeping nuclear fission reactors around to produce the tritium fuel for fusion reactors.
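To see the problem concretely, you can run Jassby’s own figures through a toy recycling model. This is my illustrative sketch: a 5 percent burn fraction per pass and a 1 percent loss of the unburned remainder per pass are assumptions of mine, chosen to be consistent with the figures he quotes (“less than 10 percent… will actually be burned”, “re-injected 10 to 20 times”, “1 percent of the unburned tritium is not recovered”):

```python
# Toy model of tritium recycling in a D-T reactor, using figures
# consistent with Jassby's: each pass, a fraction BURN of injected
# tritium fuses, and a fraction LOSS of the unburned remainder is
# never recovered. Summing the geometric series over re-injections:
BURN = 0.05  # assumed burn fraction per pass ("less than 10 percent")
LOSS = 0.01  # assumed unrecovered fraction per pass ("only 1 percent")

total_lost = (1 - BURN) * LOSS / (BURN + (1 - BURN) * LOSS)
avg_passes = 1 / (BURN + (1 - BURN) * LOSS)

print(f"fraction of tritium ultimately lost: {total_lost:.1%}")  # ~16%
print(f"average number of injection passes:  {avg_passes:.1f}")  # ~17
```

With these assumptions, roughly 16 percent of the tritium is ultimately lost — already more than the 15 percent best-case breeding surplus Jassby cites, which is exactly his point.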
But you can see that there are some pretty big potential asterisks to the idea that power from nuclear fusion is effectively limitless. It is, providing we develop new ways to extract the relevant fuels, or else master the more difficult form of nuclear fusion which could take even longer than getting deuterium-tritium fusion to work.
Another major downside that particularly afflicts magnetic confinement fusion is parasitic energy consumption. In other words, nuclear fusion reactors need power to work. The magnets that confine the plasma need to be cooled with liquid helium; you need to pump the coolant around; you need a good vacuum pump to evacuate the chamber where fusion is taking place; you need to process the tritium and deuterium to provide fuel for the plant; you need to air-condition the buildings, and so on. These energy costs are incurred constantly, regardless of whether or not the plant is currently producing energy — so every hour of downtime makes the balance sheet worse. And while the plant is running, in a tokamak, you heat the plasma by driving a current through it, which requires still more energy.
The point here is not that you can’t ever reach breakeven. The point is that, with current physics and engineering knowledge, you need a huge device just to generate more energy than is consumed. Generating one megawatt more than you consume is obviously not economically feasible if your plant costs $20bn to build — so you need a really huge device to generate substantially more energy than is consumed. According to Jassby, this means the smallest commercially viable fusion reactor is probably around 1GW. And, indeed, if you look at the mainstream plans for tokamaks at present — ITER, which is supposed to be the first fusion reactor to produce more energy than it consumes, is slated to produce 500MW. But DEMO, the hypothetical plant that follows ITER and is supposed to be the first practical working fusion power plant, is slated to produce 25x the power required for breakeven, and to operate at a scale of 2–4GW. By comparison, the largest coal-fired power plants in China and South Korea are 4–6GW, and they are behemoths. So, in other words, the moment you decide to get power from nuclear fusion — unless you have some new technology that goes against the current ITER-DEMO orthodoxy — you’re committing to building one of the biggest power plants in the world, with some of the most expensive equipment, and a huge capital overhead. Unlike coal-fired power plants, you can’t build it in pieces and add more furnaces and generators. Unlike my beloved solar panels, you can’t just throw down a few dozen where you have some spare space for limited money and plug them into the grid, or even build them on the roof of your house. It’s all or nothing, and all is a lot.
Next time, we’ll go further into the depressing mire of reasons why fusion energy might not be “clean, limitless, and cheap” after all.
Nuclear Fusion: The Buzzkill Episode
Last episode, we got very cynical about the oft-stated claim that energy from nuclear fusion is in any sense “limitless”. Now, we’re going to get super cynical about some of the other things that people say about nuclear fusion.
You can apply a similar sceptical eye to the claims that nuclear fusion is “clean”. It’s perfectly true that fossil fuels produce carbon dioxide which leads to global warming, and fission power plants produce radioactive chunks from the reactions that power them. By comparison, the direct fusion reaction — deuterium plus tritium — produces a fast neutron and helium. So it’s true that fusion is clean in the sense that none of the direct products are radioactive… although the tritium fuel is radioactive.
But this conceals the fact that those neutrons bombard the reactor casing. Those highly energetic neutrons are like few other things we see on earth. You can’t slow them down or control their path with electric or magnetic fields. They crash into things and ionise them; they knock atoms out of their positions in the lattice; they can generate hydrogen and helium when they crash into the walls of reactor vessels, leading to harmful pockets of gas. The molten lithium in the breeder blanket can catch fire or explode, leading to damage to the reactor vessel. And this bombardment results in the material becoming radioactive. Eventually, as the material degrades under this neutron bombardment, it will need to be removed — and you end up with tonnes of heavy radioactive casing. It’s less radioactive than enriched uranium, or the waste output from fission power plants, but you still need to dispose of it somewhere, and there’s a far greater volume of waste produced by the fusion power plant. It may be less dangerous, in other words, but calling it clean is a bit of a stretch.
Jassby points out:
“Materials scientists are attempting to develop low-activation structural alloys that would allow discarded reactor materials to qualify as low-level radioactive waste that could be disposed of by shallow land burial. Even if such alloys do become available on a commercial scale, very few municipalities or counties are likely to accept landfills for low-level radioactive waste. There are only one or two repositories for such waste in every nation, which means that radioactive waste from fusion reactors would have to be transported across the country at great expense and safeguarded from diversion.”
Let’s also note that making these alloys — the kind that can stand up pretty well to intense, energetic neutron bombardment — is proving to be a very difficult task. These neutrons are travelling at significant fractions of the speed of light. And, short of fusion reactions, it’s extremely difficult to get neutrons of this energy to test your materials with. So it’s not just a matter of using some extremely clever tungsten alloy. If your material is going to be badly damaged by the neutron flux, then you’ll need to replace it more often: that’s more downtime for the reactor, more expense to replace the shielding, and more nuclear waste to dispose of.
Another advantage that’s often touted for nuclear fusion is that it’s safe. And it’s very true that you can’t really have a Chernobyl-style disaster with a nuclear fusion reactor. A fission reactor essentially takes a chain reaction that wants to run away, damps it down, and harnesses it; the failure mode can easily be explosive, spreading radioactive debris over a large area. Meanwhile, the fuel is partially-enriched uranium — with further enrichment, reactor-grade uranium becomes weapons-grade uranium, which is why there is a great deal of controversy over countries like Iran pursuing nuclear power for ostensibly peaceful purposes.
Meanwhile, if a nuclear fusion power plant stops working, the worst that happens is that the plant, and possibly the immediately surrounding area, will be badly damaged. There’s no way to have a runaway fusion reaction; as soon as plasma confinement breaks, fusion will stop. As the last few episodes will have convinced you — if nothing else — attaining the conditions for fusion to work and produce energy is extremely difficult. Attaining the conditions for fission to produce an explosive amount of energy pretty much just involves letting a big lump of enriched uranium sit there.
There are some other worst-case scenario safety risks — maybe the vacuum fails and the superconducting magnets or the vessel explodes. This will destroy the power plant, but it doesn’t have the potential to spread dangerous fallout over a very large area in the same way as a nuclear fission power plant can.
However, Jassby once again notes that not all is rosy in terms of nuclear proliferation for fusion power plants either. The concern that bad actors might pretend to be enriching uranium for peaceful purposes while actually making bomb material plagues fission reactors. In fusion reactors, the fast neutrons present their own problem: throw uranium-238 or uranium oxide — both much easier to get hold of than enriched uranium — into a fusion reactor’s neutron flux, and you breed plutonium-239. Jassby calculates that even a small 50MW test deuterium reactor could produce up to 3kg of plutonium-239 a year. Plutonium-239 is used even more widely in nuclear weapons than highly enriched uranium, because a smaller critical mass is needed for bomb construction. It’s obviously difficult to get precise figures on how much Pu-239 you need for a bomb, and my Google search history is slightly more incriminating after writing this script. Suffice it to say, that’s probably enough Pu-239 for at least one Nagasaki-style bomb; so nuclear fusion plants may need to be inspected in much the same way as fission plants.
Waste disposal is a huge part of why nuclear fission reactors are so expensive to run — in fact, according to Lazard, nuclear fission power plants are already more expensive than natural gas, coal, wind, biomass, geothermal, solar thermal and solar photovoltaic plants. In recent years, fission power plants — like the Hinkley Point C power plant in the UK — have been over budget and taken far longer to construct than initially planned.
One can argue that nuclear fission hasn’t received the research and development funding that it deserves. But it’s more than fifty years since the first power plants opened with a promise that they would be “too cheap to meter”. You look at the economics of nuclear fission — and realise that most of the disadvantages due to the overhead of construction costs for the plant and waste disposal might be just as bad for nuclear fusion. Fusion advocates reassure you that, over time, all technologies get cheaper, more reliable, the kinks get ironed out and so on… yet this hasn’t been the case for nuclear fission in practice over these many years.
The final point I’ll make when it comes to practical, economically viable fusion is that for anyone to practically invest in it, it has to compete with what already exists. It’s true that fusion has some advantages over solar power — you can put a fusion reactor in places where the sun doesn’t shine, it takes up less space, and you don’t need energy storage systems to overcome the intermittency problem. But if a fusion reactor costs ten times more than the equivalent in solar panels, at some point it becomes cheaper just to go with solar and suck up the cost of storing the energy, or transmitting it across long distances to where it’s most needed, or buying the land. This is before we even get into the fact that it’s possible to build yourself a fairly small solar farm if you want: anyone who wants to get into the nuclear fusion energy business better be capable of throwing down billions of dollars in the initial overhead before they can even begin to generate energy and realise a profit.
Currently, in San Luis Obispo, the Topaz Solar Farm is generating 500MW of power. Because of winter and night-time, the capacity factor is around 25% — in other words, averaged over a full year, it delivers around 125MW, and you’d need four of these operating to match a 500MW power plant that’s always on. Topaz cost $2bn. ITER — a demonstration reactor that will produce 500MW of power for a few minutes at a time, if it works — has already cost $20bn. So you could build four Topaz Solar Farms today for less than half the price of ITER, and still have plenty of cash to spend on storage and transmission solutions. In other words, solar + storage is probably already cheaper than the optimistic cost estimates for fusion.
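The comparison can be made crisper in terms of capital cost per *average* watt delivered. This is a rough sketch of mine using only the figures above — it deliberately ignores plant lifetime, fuel, storage, and running costs, and is meant only to show the scale of the gap:

```python
# Capital cost per average watt delivered, from the figures in the text.
def dollars_per_avg_watt(capex_usd, nameplate_w, capacity_factor):
    """Capital cost divided by average power output."""
    return capex_usd / (nameplate_w * capacity_factor)

topaz = dollars_per_avg_watt(2e9, 500e6, 0.25)   # Topaz Solar Farm
iter_ = dollars_per_avg_watt(20e9, 500e6, 1.0)   # ITER, even if it ran 24/7

print(f"Topaz: ${topaz:.0f} per average watt")   # $16/W
print(f"ITER:  ${iter_:.0f} per average watt")   # $40/W -- and it's a demo, not a plant
```

Even granting ITER a fantasy 100% capacity factor, the fusion demo costs more than twice as much per delivered watt as the solar farm that already exists.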
And, of course, fusion has to compete with what the cost of electricity is *going* to be when fusion works — not the cost of electricity as it is now. Solar panel prices have been falling precipitously. In the developing world, where labour costs are cheaper and the sunshine more abundant than even in California, solar just keeps getting better as an option. There’s an even newer plant in Kamuthi, in India, with a capacity of 650MW that only cost $700m to build. The Longyangxia Dam solar park in China has a capacity of 850MW and cost around $1bn to build.
These plants have already been constructed. Meanwhile, consider a plan for Korea’s own DEMO power plant from fusion. The head of Korea’s ITER agency was interviewed, and said:
“Can you provide an estimate of K-DEMO’s construction cost? How does it compare to the ~EUR 13 billion ITER price tag?
At present, it is premature to estimate K-DEMOs cost.”
The final note I’ll have to say on the subject of economic practicality involves a little back-of-the-envelope calculation that I found in an eye-opening blog post at matter2energy, by renewable energy blogger Maury Markowitz.
Obviously, like everything above, this can all depend on your assumptions — maybe solar stops getting cheaper, maybe someone finds a way to make a smaller fusion power plant, maybe storing energy from solar ends up being far more expensive or environmentally damaging than fusion power. Maury clearly has *opinions* about fusion, so take it with a grain of salt. Nevertheless, the logic is compelling, and his style is pretty inimitable, so I’ll quote.
“There are three groups involved in building a power plant, and a design has to make all three happy.
First, and most obvious, is the power company. They really don’t care about technology. Their only concern is a number called the Levelized Cost of Electricity. LCoE basically tells you how much you have to charge your customers for the power generated by the plant. That better be lower than the customers can get elsewhere, or there’s no point building a plant.
Then there’s the engineering firm that actually builds the plant. They don’t give a crap about the technology or the LCoE, the only thing they care about is making a profit building it. Is this a machine that lots of people have built before and is well understood? No problem. A new concept that no one really knows much about? You’re going to have to pay them a lot more.
And finally, and most important, are the bankers. They don’t give a crap about the power company’s profitability or the construction company’s, they only care about their profitability. And that is 100% based on the interest they can charge the power company and the risk that the company will default.
Right now the nuclear power industry is dying a horrible death everywhere in the western world. That’s because the bankers won’t pay for it. There is no other reason: it’s not because of tree-huggers or a global conspiracy of anti-nuclear government agencies. It’s the bankers.
You can’t blame them. A fission reactor at an existing site takes 4 to 6 years to build, during which time you make no money. Reactors at new sites generally take 10 to 12 years. Meanwhile, wind turbines go from the first sketch on a napkin to on the grid in 18 months or less. Consider the decision that a banker has to make when presented with two pitches:
- I want 10 million for 18 months, after that I’ll pay you 6%
- I want 25 billion for 5 years, after that I’ll pay you 8%
Option 1 gets the money every time. Not just in theory — this is clearly what is happening in the real world.
You can argue the technical superiority of fission over wind all you want — in fact, it’s pretty much all true. It is a fact that wind cannot be dispatched while nuclear has a capacity factor around 90% and provides all sorts of baseload. It is a fact that nuclear takes up less land than the equivalent in windmills. Add any of the other advantages you’ve heard, they’re probably true too.
Here’s the problem with all of those arguments: the bank doesn’t give a crap.
So the places that are building nukes are invariably where the local government is willing to put up the money, generally interest-free. We have new reactors in China and Korea, and everyone else is doing basically nothing. Actually, in the US all the money is backed by the government, and the companies have ignored it anyway. It’s just too expensive and economically risky.”
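Maury’s “Levelized Cost of Electricity” deserves a quick definition. A minimal version — my sketch; real LCOE models also account for fuel, operations and maintenance, degradation, and tax treatment — annualises the financed capital cost and spreads it over every kilowatt-hour the plant produces. The plant parameters below are illustrative guesses of mine in the spirit of Maury’s two pitches, not real project figures:

```python
# Minimal levelized-cost-of-electricity sketch: annualise the capital
# cost with a capital recovery factor, divide by annual energy produced.
def lcoe_usd_per_kwh(capex_usd, nameplate_kw, capacity_factor,
                     discount_rate, lifetime_years):
    r, n = discount_rate, lifetime_years
    # Capital recovery factor: annual payment per dollar borrowed.
    crf = r * (1 + r) ** n / ((1 + r) ** n - 1)
    annual_kwh = nameplate_kw * capacity_factor * 8760
    return capex_usd * crf / annual_kwh

# Caricatures of the two pitches: cheap-and-fast vs expensive-and-slow.
wind = lcoe_usd_per_kwh(10e6, 5_000, 0.35, 0.06, 20)       # small wind farm
nuke = lcoe_usd_per_kwh(25e9, 2_000_000, 0.90, 0.08, 40)   # big nuclear plant
print(f"wind: {wind*100:.1f} c/kWh, nuclear: {nuke*100:.1f} c/kWh")
# roughly 6 vs 13 cents per kWh with these assumed numbers
```

Note how the nuclear plant loses on capital charges alone, despite its far better capacity factor — which is Maury’s point about the bankers in miniature.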
Maury goes on to fly the flag for renewables by pointing out that there’s a reason the prices for solar and wind are falling. There’s a reason that they will fall below even the prices for the coal, oil and natural gas plants that have had decades of being the world’s dominant energy supply to refine and cost-cut and corner the market. Not only do you not have to worry about supplying them with fuel that needs to be dug out of the ground, refined, and transported, but solar and wind essentially produce electricity directly. Solar panels produce a voltage when photons hit them. Wind directly spins the turbine that generates electricity. But nuclear power plants and fossil fuel power plants generate heat, and that heat is converted into electricity by a heat engine, with all the thermodynamic inefficiencies and complex parts that implies. Heat engine power plants are often more complicated than a wind turbine or solar panel that can more or less be a self-contained unit that generates power. For that reason, they’re often more expensive. Indeed, Maury argues that if you look at a nuclear fission power plant, the reactor which actually generates the heat from nuclear fission is around 1/3 of the capital cost. The other 2/3 is for the heat-to-electricity side of the equation — which is independent of what actually generates the heat. So even if the fusion reaction side of the equation was effectively free, it still might end up being more expensive than a fleet of wind turbines.
All of this is not to say that fusion energy won’t ever work, or form part of the energy mix. But I will leave you with this rather damning back-of-the-envelope calculation.
The cost of the *concrete floor* for the room that houses ITER came in at around 15 cents per watt of power the reactor is designed to generate. Not the lithium, not the superconducting magnets, not the reactor vessel — the actual floor of the building that contains it. Meanwhile, according to CleanTechnica, the cost of a solar panel today is 40 cents per watt. By 2040 — around the time fusion scientists hope to have proven that ITER can sustain deuterium-tritium reactions for 15 minutes at a time — industry experts project that it could be 21 cents per watt.
So yes, there are installation costs; yes, there are intermittency and storage problems for renewables; yes, you need to transmit the power from places with a lot of sunshine. But if you’re looking at a situation where the price of the concrete floor for your insanely complicated fusion reactor might be comparable to the price of the heart of the power plant for your solar panels, it’s hard to imagine that the problems associated with large-scale solar will be more expensive to solve than those associated with large-scale fusion power plants.
Indeed, as far back as 2006 — perhaps cynical after years of waiting for fusion — an old scientist called William Parkins published his estimate in a Science Magazine article entitled “Fusion power: Will it ever come?”
“Scaling of the construction costs from the Bechtel estimates suggests a total plant cost on the order of $15 billion, or $15,000/kWe of plant rating. At a plant factor of 0.8 and total annual charges of 17% against the capital investment, these capital charges alone would contribute 36 cents to the cost of generating each kilowatt hour. This is far outside the competitive price range.”
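Parkins’s number checks out arithmetically — you can reproduce the 36 cents directly from his stated assumptions:

```python
# Reproducing Parkins's capital-charge estimate from his assumptions.
capital_per_kwe = 15_000   # dollars of plant cost per kW of electric rating
annual_charge_rate = 0.17  # annual charges against the capital investment
plant_factor = 0.8         # fraction of the year spent producing power

hours_generating = plant_factor * 8760  # kWh generated per kW of rating, per year
cost_per_kwh = capital_per_kwe * annual_charge_rate / hours_generating
print(f"capital charges alone: {cost_per_kwh*100:.0f} cents per kWh")  # 36
```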
Given that, according to Bloomberg New Energy Finance, the levelized cost of electricity for onshore wind is already 5.5 cents per kilowatt-hour, and 7 cents for solar photovoltaics — if Parkins’s estimate proves even remotely accurate, the construction costs alone mean that fusion will struggle to compete with renewables. At that point, it really comes down to whether you think storing renewable energy will cost more than five times as much as generating it. A fusion sceptic would argue that the progress you need to assume in energy storage, for renewables-plus-storage to win, is far, far less than the progress fusion would need to make to become economically competitive. And I think such a sceptic may well be right. Such a sceptic might say that we already have the only fusion reactor we’ll ever need — one that works for free, requires no maintenance, and rises conveniently for us every morning. Again, when you look at the economics of fusion and the still-unresolved “engineering problems”, it can seem like the sceptics are right. And some of those sceptics would argue that ITER is not so much a viable route towards power generation as a rather expensive, rather well-funded plasma physics experiment — more of a Large Hadron Collider for plasmas than anything else.
So after all this time detailing the marvellous and fascinating history of efforts to put the sun in a bottle, efforts towards nuclear fusion, I thought it would be good to give a countervailing perspective and pour a whole bunch of, if not cold water, then cold liquid lithium, onto the flames. It would be terribly unromantic if this great scientific odyssey, these decades of striving and ingenuity, the sheer magnificence of mimicking the nuclear process that allowed the atoms that make us up to be formed — was completed, only to turn out to be ruined by something as mundane as money. But sometimes it doesn’t seem like the world we live in is all that romantic.
Fusion may work. Fusion may even, given a certain set of assumptions, be necessary. And if ITER works and the price falls, or China pulls off a miracle, or the little, dreamy-eyed, ambitious start-ups find a way to make smaller fusion reactors — or humanity turns out to be really rubbish at storing energy — it might still be our saving grace. But let’s continue along our starry-eyed quest to put stars in bottles with a slight note of caution. Anyone who tells you that fusion energy will be 100% limitless, clean, or safe doesn’t quite know the full story. And anyone who has the temerity to call it “free” or even “cheap” given what we know so far should probably be stapled to the reactor wall.
But, as the late great Elliott Smith suggested, perhaps a distorted reality is now part of a necessity to be free. So, armed with our newfound cynicism and a sneaking suspicion that maybe the whole damned insane enterprise is really doomed after all — although, perhaps, just perhaps it could be our salvation — we’ll continue the journey through the twists and turns in the history of nuclear fusion, from the good and the bad to the downright insane. After all, perhaps this was all far too cynical about the prospects for commercial fusion to power your home. Maybe making a miniature sun to boil the kettle is not too fanciful a prospect. And isn’t it still a beautiful story, all the same? If we don’t have dreams, my dears, what can we really say we do have?
— — — — -
Nuclear Fusion: The Big Three Tokamaks
In the last couple of episodes, we discussed the first few fusion reactor studies: people beginning to really grapple with the practicalities of not just getting plasma to behave long enough to generate energy from nuclear fusion, but actually harnessing that power in a way that’s practical and cost-competitive with other sources of energy. As we discussed, the engineering problems multiply the closer you look at the project. None of them means that fusion is totally impossible, or even necessarily worse than other sources of energy once it’s developed — but together they suggest that the promise of “limitless, cheap, clean, practically free” energy constantly dangled around fusion is a very long way from reality. Perhaps, if you’re being cynical, utterly divorced from reality. But frankly, I regret marrying reality in the first place, so we’ll push on.
Yet in the 1970s and 1980s, there was renewed impetus behind nuclear fusion — in the US and Europe especially, it was motivated by the oil shocks. Realising that dependence on foreign oil and finite fossil fuels was perhaps not the best way to go — a realisation that I’m sure will eventually sink in globally — governments increased funding for nuclear alternatives and renewables. Inertial confinement fusion sprang up as a new arm of fusion research, with first Janus and then Shiva — progressively larger machines — trying to force fusion to happen with a sudden, explosive compression by lasers. Meanwhile, the world of magnetic confinement fusion had gone mad for the tokamak and more or less abandoned the earlier pinch and stellarator ideas entirely, drawn in by the increased confinement times and temperatures the Russians had been achieving with their tokamaks.
Where we last left off, dozens of tokamaks were being designed and built across the world, under a fanciful list of names — the Texas Tokamak, the Doublet, Alcator, Ormak, the Symmetric Tokamak, to name but a few. This wasn’t entirely an exercise in scientific competition, with different institutions vying to be the first to attain nuclear fusion. Nor was it entirely duplication. Instead, the devices were exploring different parameters of plasma physics. One of the key realisations early in the tokamak’s life was that the cross-section of the plasma could make a difference — plasmas shaped like a half-moon, or with a D-shaped cross-section, could perform better than perfect cylinders of plasma.
Although physicists had by now realised that plasma could behave in incredibly complicated and unexpected ways — and that its behaviour in magnetic fields could give rise to all kinds of instabilities and quirks that weren’t predicted by simply treating the plasma like a fluid, or like a collection of charged particles moving in electromagnetic fields — a full theory of how the plasmas in tokamaks operated, capable of predicting how they would respond to specific designs, was still elusive. As a matter of fact, many of the main theoreticians and fusion scientists disagreed about why tokamaks were more successful, or how they could be made more successful. The interpretation of the experimental results first obtained by the Russians, and then by later generations of tokamaks around the world, was highly controversial.
But a new set of theories did arise, with semi-empirical scaling laws gradually being derived. A scaling law is usually a simplified formula that roughly approximates the behaviour of a system based on how it depends on the parameters you vary. A great example is a charged particle in a magnetic field. We have Maxwell’s equations and the Lorentz force law, so we can calculate exactly what the forces are on that charged particle and consequently figure out things like its trajectory and how it will accelerate. But let’s say you don’t have access to any of that — all you have is experimental results. With a little experimentation, you might soon realise that — when the charged particles orbit in circles in the magnetic field — the radius of the orbit is proportional to the speed of the particle, and inversely proportional to the magnetic field and its charge. That way, even though you may not know the underlying equations, you can still predict behaviour. The only problem is that — without knowing the underlying theory — you can’t know whether your formula applies in all regimes. And, indeed, if you sped that charged particle up until it was travelling at relativistic speeds — close to the speed of light — you’d need to make a relativistic correction to the formula, and the radius would no longer be simply proportional to the speed.
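The scaling law in that example is the textbook gyroradius, r = γmv/(|q|B) — nothing specific to tokamaks, but it shows how a simple empirical law holds until you leave its regime:

```python
import math

# Gyroradius of a charged particle in a magnetic field.
# Non-relativistic scaling: r proportional to v, inverse to |q| and B.
def gyroradius(mass_kg, charge_c, speed_ms, b_tesla, relativistic=False):
    gamma = 1.0
    if relativistic:
        # Lorentz factor: the correction that breaks the simple scaling law.
        gamma = 1.0 / math.sqrt(1.0 - (speed_ms / 299_792_458.0) ** 2)
    return gamma * mass_kg * speed_ms / (abs(charge_c) * b_tesla)

M_E, Q_E = 9.109e-31, 1.602e-19  # electron mass (kg) and charge (C)

slow = gyroradius(M_E, Q_E, 1e6, 1.0)  # a few micrometres
# Doubling the speed doubles the radius; doubling B halves it:
assert abs(gyroradius(M_E, Q_E, 2e6, 1.0) / slow - 2.0) < 1e-9
assert abs(gyroradius(M_E, Q_E, 1e6, 2.0) / slow - 0.5) < 1e-9

# Near light speed, the naive law underestimates the radius:
v = 0.99 * 299_792_458.0
ratio = gyroradius(M_E, Q_E, v, 1.0, relativistic=True) / gyroradius(M_E, Q_E, v, 1.0)
print(f"relativistic radius is {ratio:.1f}x the naive prediction")  # ~7.1x
```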
Essentially, by designing different and varied tokamaks and running different experiments with them, the scientists were aiming to explore the parameter space of tokamak design; to try to figure out these scaling laws by having lots of different data points. Then, when the time came to build the big tokamak, they would hopefully know something about how it was going to perform in advance.
The kinds of parameters they were looking at related the confinement time of the plasma to a few others: the magnetic field in the torus; the diameter of the tokamak; the shape of the cross-section; the current flowing through the plasma itself; and the aspect ratio — the ratio of the torus’s major radius to the minor radius of the plasma tube, which loosely corresponds to the size of the hole in the donut of steel and magnets that encloses the plasma. Over the years, several tokamaks would be built where the donut hole was extremely small — low-aspect-ratio machines, generally called spherical tokamaks. It was known at the time that the aspect ratio affected the performance of the plasma in a number of different ways; but it wasn’t necessarily clear which approach would succeed.
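For concreteness, the aspect ratio is just the major radius divided by the minor radius. The dimensions below are illustrative values of mine — roughly ITER’s published geometry for the conventional case, and a notional compact machine for the spherical one:

```python
# Aspect ratio A = R / a: major radius of the torus over minor radius
# of the plasma cross-section. Small A means a small "donut hole".
def aspect_ratio(major_radius_m, minor_radius_m):
    return major_radius_m / minor_radius_m

conventional = aspect_ratio(6.2, 2.0)  # roughly ITER's geometry
spherical = aspect_ratio(0.85, 0.65)   # notional spherical tokamak (illustrative)

print(f"conventional: A ≈ {conventional:.1f}")  # ≈ 3.1
print(f"spherical:    A ≈ {spherical:.1f}")     # ≈ 1.3
```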
A big part of the reason that tokamak scientists were able to even consider fiddling about with each of these different parameters to see what might happen was because the diagnostic tools used had gotten much better. Rather than realising that your experiment had failed when plasma smashed into the walls of the device, or settling for a few photos of the instabilities as they writhed out of control, scientists could get a better sense of what was going on in different regions of the plasma, and at different times. This allowed them to change things like the aspect ratio or the plasma cross-section and see how the plasma behaved differently.
There was an initial burst of enthusiasm for tokamaks in the 1970s — motivated because the technology was new to the West, because Jimmy Carter’s administration in America was keen on alternative energy, and because the oil crisis was pushing everyone to seek out new forms of energy.
But eventually this proliferation of dozens of different kinds of tokamaks was bound to peter out — particularly when it became clear to funders that they were plasma physics experiments rather than necessarily viable routes to fusion energy. This was particularly true in the US. Under Jimmy Carter, the fusion budget was nearly doubled from $400m to $800m, and an official target was set to have an operating fusion demonstration plant by the year 2000. When the backlash came under Ronald Reagan, these plans were abandoned, and the fusion budget was gradually chipped away at over the next few years.
Amazingly — and talk about mismanagement — Seife tells the story of a huge, $300m magnetic mirror project that was actually completely built before it was scrapped from the budget entirely.
In the 1970s, when magnetic fusion research was in this optimistic phase, two mirror projects had been designed. One was the Tandem Mirror Experiment, intended as a small demonstration plant; the large-scale project was dubbed the Mirror Fusion Test Facility. The MFTF was being constructed throughout the 1980s, even as the political situation changed and funding began to dry up for magnetic confinement fusion. Meanwhile, experiments at the Tandem Mirror Experiment illustrated that confining the plasma might prove a far trickier prospect than previously thought.
The leakiness of the magnetic bottle was the final straw for those in charge of the purse-strings, but even so, it’s jarring to consider how it ended.
The thing was so close to being switched on that the dedication ceremony for the site had already been planned — and it went ahead as planned, even though the project had zero budget and would never be used. One attendee said they felt like they were attending a wake. Just like that, all of the scientific effort put into those projects, and the hundreds of millions of dollars, went up in smoke. The ideas have been resurrected by Polywell, but decades later, and who can say if this precise design will ever work. You can still go online and see photos of the scientists involved, having backed the wrong horse, standing in front of their newly constructed, state-of-the-art fusion reactor, doomed never to be switched on.
Eventually, as the squeeze on funding deepened, all of those fancy and multivaried tokamak creations — the “Elmo Bumpy Torus”, the Impurity Studies Experiment, the Texas Tokamak, and any number of other magnetic confinement fusion projects — were abandoned in favour of a single, behemothic project. Similar things happened in Japan and Europe, the other two major centres of magnetic fusion research at this time: a vast array of different projects, with a panoply of different approaches to magnetic confinement fusion, gradually gave way to one big tokamak that sucked up all of the funding and expertise from the surrounding area.
In Japan, the tokamak was the JT-60. In the USA, it was called the Tokamak Fusion Test Reactor — TFTR. And, in Europe, the tokamak was built at the Culham laboratory in Oxfordshire, and it was called the Joint European Torus — JET.
In some ways, these three devices were all very similar approaches to magnetic confinement fusion — build a big tokamak, larger than anything that’s previously been constructed. JET would be able to induce larger currents in its plasma to exploit the pinch compression effect, and TFTR would have stronger magnetic fields than the other two, but in broad strokes the three big tokamaks were similar projects.
And they had similar aims, too. After all, this was not the 1950s any more — the glorious, early days of fusion where no politicians or funders had heard the story before, and everyone was happy to throw money at a problem that seemed close to solution. At this point, you needed to actually demonstrate that your machine had accomplished something tangible. So each of these tokamaks was set up to use deuterium-tritium fuel — which, thanks to its low energy barrier, was the most feasibly attainable fusion reaction in the near future — and they were designed to reach the symbolic goal of breakeven. Saying “this new device will make great contributions to our understanding of plasma physics, enabling future devices to generate power” is only going to work so many times. Saying “this device will generate as much power as it requires to run, and from there it’s only a hop, skip, and a jump to fusion generators” is far more persuasive… no matter what pesky practicalities you’re attempting to mask.
So this is the trend in the 70s and 80s for magnetic confinement fusion; gradually, alternative approaches like renewed efforts at pinch devices, stellarators, and magnetic mirrors lose favour and funding, as any setbacks just emphasise the dominance of the tokamak as a model for magnetic fusion. Nowadays, in a world where tokamaks have dominated for thirty years or more, a lot of these ideas are being picked up again, with private companies experimenting with pinch devices and several new stellarators under construction. But it was the tokamak that seemed most likely to achieve this important symbolic goal of energy breakeven, and it became increasingly clear that you weren’t going to get the funding to build half a dozen different configurations of tokamak that could do that in parallel, so the world ended up with three.
There were reasons to think that tokamaks might be able to achieve breakeven. Experiments in the US with the previous leading generation of tokamaks — the Princeton Large Torus, for example, which followed on from the hastily converted stellarators inspired by Russian designs in the late 60s and early 70s — had been promising. They had come up with a new method of heating the plasma, above and beyond the ohmic heating from pumping a current through it. Neutral beam heating, as it was called, involved shooting beams of neutral atoms into the plasma. Because they are neutral, they can enter the plasma without being deflected by the magnetic field — and without creating plasma instabilities by interacting with the plasma along the way, as might happen if you bombarded it with protons or something. Then, once inside, they collide with the plasma particles, become ionised, and are confined like the other charged particles for a little while, transferring most of their energy and momentum for efficient plasma heating. This had allowed the Princeton Large Torus to heat its plasma to 60 million kelvin, which was considered to be just below the temperature threshold required for breakeven in a fusion reactor.
Of course, temperature alone is not sufficient to reach breakeven — you’re dependent on this pesky fusion triple product: the product of the confinement time, the plasma density, and the temperature of the plasma. But the scientists were pretty sure that, with a larger machine, they’d be able to bring the magnetic fields and the density up. And TFTR, like its competitors the JT-60 and JET, was more than twice the size of anything in the previous generation of tokamaks.
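As a rough illustration of how the triple product works, here's a short Python sketch; the D-T threshold figure in it is a commonly quoted ballpark we've assumed purely for scale, not a number from the episode:

```python
# A commonly quoted ballpark for deuterium-tritium ignition is roughly
# 3e21 keV*s/m^3 (an assumed, illustrative figure, not from the episode).
DT_IGNITION_BALLPARK = 3.0e21  # keV * s / m^3

def triple_product(density_m3, temperature_keV, confinement_s):
    """n * T * tau: the figure of merit a fusion plasma must maximise."""
    return density_m3 * temperature_keV * confinement_s

# A hypothetical plasma: 1e20 particles/m^3, 10 keV, held for 1 second.
base = triple_product(1e20, 10.0, 1.0)  # 1e21 keV*s/m^3

# The dilemma in miniature: doubling the temperature while the machine
# can only confine the plasma half as long leaves the product unchanged.
hotter = triple_product(1e20, 20.0, 0.5)
assert hotter == base

print(base / DT_IGNITION_BALLPARK)  # fraction of the assumed threshold
```

This is exactly the trap TFTR kept falling into: pushing any one factor up tended to push another down, leaving the product, the thing that actually matters, stubbornly short of the goal.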
Yet ultimately, TFTR would not achieve break-even. Instead, it would sit there, quietly churning away over the next few decades, breaking record after record without ever quite doing what it was designed to do. Charles Seife, as ever, paints a grim picture of the lab in the 1990s:
“The first thing that would strike a visitor to Princeton in the 1990s would be the circles. A large ring-shaped desk; a circular sofa surrounding a toroidal model of the TFTR; a semi-circular auditorium, and the countless loops of previous tokamaks displayed in the waiting room. The second thing that would strike a visitor was the air of quiet desperation that hung about the lab. The staff was trying to sell fusion to the public, and while TFTR was setting temperature records almost daily, nobody seemed to be buying. Budgets were still dropping, and the taxpayer didn’t protest.”
Ultimately, the story of TFTR — which set itself the goal of breakeven — was always a slight disappointment. At the end of Joan Lisa Bromberg’s excellent book on early fusion history, written in 1982 — the year that TFTR started operations — she notes with a hint of optimism that TFTR was “poised to cross the break-even line” in that decade. And, initially, in the 1980s, results seemed promising. In April 1986, for example, they achieved a density-confinement (Lawson) product of 1.5 * 10¹⁴ seconds per cubic centimetre. This was substantially higher than the value that would be required for breakeven, and close to that which had been calculated for commercial fusion — but the temperature wasn’t high enough for the plasma to actually ignite. Just a few months later, in July of 1986, running the tokamak in a different configuration — aiming to maximise temperature — led to a world-record temperature of 200 million kelvin, the highest non-explosive temperature ever reached in a laboratory at that point. It was still no good. To attain those high temperatures, the density or the confinement time of the plasma would have to suffer instead.
The official write-up of TFTR rather glosses over this: if you visit the Princeton Plasma Physics Laboratory’s page on the reactor, you find this:
“TFTR set a number of world records, including a plasma temperature of 510 million degrees centigrade — the highest ever produced in a laboratory, and well beyond the 100 million degrees required for commercial fusion. In addition to meeting its physics objectives, TFTR achieved all of its hardware design goals, thus making substantial contributions in many areas of fusion technology development.”
Which, in my view, rather hides the fact that it didn’t quite achieve breakeven like it was supposed to.
Naturally, though, a lot of important plasma physics did arise out of TFTR. It was through studying TFTR that it became clear that turbulence was the next big issue for plasmas — and, with no complete theory even of the everyday turbulence that occurs when you turn your tap on full blast, a full theory of the turbulence of magnetohydrodynamic plasmas seemed a long way off. Instead, what TFTR allowed physicists and experimentalists to do was explore new regimes — new temperatures, new densities, new confinement times — which had never before been achieved for plasmas, confirm where their scaling laws held… and hope not to crash into any unpleasant surprises along the way. In 1995, TFTR scientists discovered a new fundamental mode of plasma confinement — enhanced reversed shear, a magnetic-field configuration which substantially reduces plasma turbulence.
They were working on practical matters that would eventually be crucial for a sustainable reactor, such as recovering the tritium from inside the tokamak — which is important for the reasons discussed previously, if you want your fusion reactor to be somewhat self-sustaining without requiring lots of rare tritium fuel. And, on the physics side, a greater number of “modes” of the plasma were being discovered.
If you’ve ever messed around with a guitar, you’ll know something about modes of vibration and oscillation. When plucked, every string resonates with a superposition of these modes. The simplest mode is a single “hill”: the whole string bulges up and down together, like half a sine wave. The second-order mode has a peak and a trough, like a full sine wave; the third-order mode has three peaks; and so on. Every vibration, every oscillation of the string, can be expressed as a sum of these normal modes — in the same way that you can decompose any sound into oscillations of many different frequencies.
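That decomposition can be made concrete with a little code. Here's a small, illustrative Python snippet (our own construction, not from the episode) that computes the amplitude of each normal mode for the classic idealised triangular pluck, using the standard Fourier sine-series result:

```python
import math

def mode_amplitude(n, pluck_pos, length=1.0, height=1.0):
    """Amplitude of the n-th normal mode of an ideal string of the given
    length, pulled up to `height` at `pluck_pos` and released (the classic
    triangular pluck shape). Standard Fourier sine-series result:
        b_n = 2*h*L^2 / (pi^2 * n^2 * p * (L - p)) * sin(n*pi*p / L)
    """
    p, L, h = pluck_pos, length, height
    return (2 * h * L**2) / (math.pi**2 * n**2 * p * (L - p)) \
        * math.sin(n * math.pi * p / L)

# Pluck the string dead centre: only the odd, "hill"-type modes appear.
# Every even mode (peak-and-trough, etc.) has a node at the centre,
# so it receives no amplitude at all.
amps = [mode_amplitude(n, 0.5) for n in range(1, 7)]
assert abs(amps[1]) < 1e-12 and abs(amps[3]) < 1e-12  # modes 2 and 4 silent
# The fundamental dominates; higher odd modes fall off as 1/n^2.
assert abs(amps[0]) > abs(amps[2]) > abs(amps[4])
```

The same trick, finding which idealised modes a real disturbance decomposes into, is what the plasma physicists were doing, just with far nastier geometry.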
In a similar way, there are fundamental “modes” for the behaviour of a magnetohydrodynamic system like a plasma. There are frequencies, like the cyclotron frequency, at which the system naturally likes to oscillate. By examining these idealised modes, and working out how a real plasma is composed of them, our numerical simulations of plasma behaviour, our theoretical understanding of it, and hence our ability to predict what it might do in the next reactor all improve substantially.
Of course, not all modes are desirable. In the same way that resonance at the right frequency can shatter glass or cause a bridge to collapse, certain modes in a plasma can be disruptive or explosive too. One example discovered at TFTR was the possibility that waves in the plasma — oscillations of both the particles and the electromagnetic fields, called Alfvén waves — could settle into undesirable modes that accelerate fast ions out of the reactor altogether, resulting in a loss of confinement and of efficiency, as the energy of those ions is lost with them.
TFTR also contributed to our understanding of the physics of alpha particles in plasmas. Alpha particles — familiar to us as radiation, but of course also just hot helium nuclei — are produced in great numbers when fusion reactions take place. In the regimes TFTR probed, it was important to pay close attention to these alpha particles, because the amount of their energy that they passed on to the still-fusing plasma was crucial to determining how much external energy would be required to drive a sustainable fusion reaction.
Even TFTR, which would never break even, managed to increase its fusion power by a factor of a thousand over the Princeton Large Torus, and by a factor of 100 over its own initial operating parameters. Those who championed it would rightly point out that when it was first designed, in the 1970s, the most power that had ever been produced by fusion was a few millionths of a watt — enough to power a wristwatch, perhaps, but nothing more. To generate this feeble return required megawatts of energy for the heating of the plasma and the operation of the magnets: it seemed like a futile endeavour. But by the time TFTR had finished operating, the tokamak was generating millions of watts of power — having improved on many competing designs from the previous decade more than a thousand-fold. The failure to reach break-even doesn’t look so bad when you realise just how far away this was from those earlier reactors, producing tiny amounts of power.
In 1994, the amount of power produced by fusion in TFTR was around 10.7MW. It wasn’t quite enough to break even, even then — but at the time, it was a world record for energy generation from fusion power. The success or failure of fusion experiments was no longer being measured in brief bursts of neutrons indicating that some kind of thermonuclear reaction had taken place — instead, the energy from the reaction was a pretty big fraction of what had been thrown in. A flicker in the darkness, rather than just the hint of a spark. But it would prove to be, arguably, the high point of American research and development into magnetic confinement fusion. Already, by the 1990s, government investment in magnetic confinement fusion was following the so-called Logic I path. In the 1970s, various trajectories for fusion funding had been mapped out, each with a different destination: Logic IV might correspond to a working reactor by 2000, Logic III promised a working reactor by 2020 — and Logic I, the lowest level of funding, was the path marked “Fusion Never.” And, indeed, as the US budget dwindled and TFTR failed to achieve break-even, it certainly seemed that nuclear fusion might not have the stars and stripes planted in it after all.
But was this generation of fusion reactors even capable of achieving breakeven? Part of TFTR’s problem was the particular configuration of its plasma — that all-important aspect ratio and plasma cross-section, the geometrical quantities that we told you make such important differences to plasma performance, and which led to such a huge array of different types of tokamak being hypothesised to explore all of the potential parameters. TFTR had backed the wrong horse for break-even; it would always suffer from the problem of failing to juggle the competing demands on confinement time, temperature, and density.
But in Culham, in Oxfordshire, the Joint European Torus was making strides towards breakeven. And, eventually, JET would prove to be the fusion reactor that got closest to achieving it.
Next time, we’ll tell you the story of JET and its record-setting heyday. But, of course, merely breaking even is not enough. Any fusion scientist or engineer could show you a wonderful graph of the improvements in fusion power, gradually inching towards breakeven. But sustaining this wonderful, exponential improvement curve — where, every few years, fusion reactors generated ten times more power, confined plasma for ten times longer, and achieved temperatures ten times higher than ever before — was only going to get more and more difficult. No single country could fund the next rung on that ladder: the next big tokamak to blast through break-even and towards something that could dream of being commercially viable for its price-tag. So as TFTR, JET and JT-60 were vying to be the first to breakeven, a plan was hatched amongst fusion scientists for a truly international collaboration: one that would be larger in scale than any physics experiment previously attempted, any power plant previously built. ITER: “The Way.”
— Story of JET which ends with that moment that Reagan and Gorbachev reach across the aisle to decide on ITER
— and then we need to take our detours via Seife for Cold Fusion and Bubble Fusion for 3–4 episodes
— Before returning to the modern day and giving an update on the progress of ITER
— then talking about some of the startup companies, interviews etc, and we’re finally done with fusion.
So, realistically this will end up being 20 episodes, which is TEOTWAWKI in proportions…
But Quantum Mechanics will probably end up being just as bad when we get there.
Let’s start with relativity and more classical shit.
Bonus Episode: Penthouse Fusion
There was one fascinating little anecdote that showed up in my fusion research of this era, in the US, in the 1970s — arising with the new wave of tokamaks that was motivated after the initial frenzy over their potential usefulness. It didn’t quite fit into the show, but it was too delightful to pass up, so here for your delectation is a bonus mini-episode that tells the story.
It concerns the so-called Riggatron, which arose with that oil-crisis-motivated wave of enthusiasm for tokamaks.
It starts with fusion researcher Robert Bussard, who had previously worked on such science-fiction-esque projects as the Bussard Ramjet for nuclear-powered spaceflight.
In 1960, Bussard conceived of the Bussard ramjet: an interstellar space drive powered by hydrogen fusion, using hydrogen collected from the interstellar gas with a magnetic field. Thanks to the high-energy particles found throughout space, much of the interstellar hydrogen exists in an ionized state — the so-called H II regions, which show up so beautifully in our telescopes’ false-colour images as the ionised gas recombines with electrons, releasing photons of light towards the Earth. Because the hydrogen gas is ionized, it has a charge, and it can therefore be manipulated by magnetic or electric fields. The gas is far too sparse to ever dream of collecting physically — even at its most dense, there are only a few million atoms of hydrogen in a cubic centimetre, trillions of times less dense than air — so you’d need an extremely large physical collector to get anything like enough fuel to power sustained spaceflight.
But once the particles are charged, you can attract them from a far greater collecting radius. Bussard proposed to “scoop” up ionized hydrogen and funnel it into a fusion reactor, using the exhaust from the reactor as a rocket engine.
The energy gain in the reactor would have to be extremely high for the ramjet to work at all: any hydrogen picked up by the scoop must be accelerated to the same speed as the ship in order to provide thrust, and the energy required to do that increases with the ship’s speed. And ordinary hydrogen does not fuse very well — unlike deuterium and tritium, the fuels for JET and ITER, which are unfortunately rare in the interstellar medium — so it cannot be used directly to produce energy; indeed, the slowness of ordinary hydrogen fusion is what accounts for the billion-year scale of stellar lifetimes. Dr. Bussard solved this problem, in principle, by using the stellar CNO cycle, in which carbon acts as a catalyst for burning hydrogen. Yet this can only increase the complexity of any fusion device required — if it were such a simple trick, it would already be exploited in fusion reactors here on Earth.
Likely in the course of attempting to create this ramjet, he became fascinated with fusion and was caught up in the tokamak wave. Like several previous inventor-visionaries we’ve mentioned, Bussard became obsessed with the potential and possibility of achieving fusion as an energy source that could power the world, and set up his own private company to circumvent the slow-moving process of government grants, international collaboration and scientific consensus.
In this, he was part of a long and continuing tradition of people who thought that the mainstream approach — ever bigger reactors, ever larger scientific projects, ever more-expensive and more complicated devices — was leading nowhere, and that smaller, modular fusion reactors might be possible if only the right technology or design were to be found. We’ll talk about some of these startups that are working in the present day in future episodes. By 1978, he had spent years struggling with the US Department of Energy to get his novel tokamak concept funded.
So what was so different about Bussard’s idea?
The main thing that makes it different from an ordinary tokamak is that it’s vastly smaller. As we’ve already discussed, tokamaks confine enormous amounts of heat, and generate neutrons that can easily damage and destroy any fragile components in the way. Using superconducting magnets, which are delicate, means that you need layers of shielding to protect them — vacuum chambers large enough to reduce the number of neutrons hitting any particularly delicate surface, and so on. Bussard’s plan was more or less to abandon all this and manufacture a cheap interior vessel. As soon as it became weakened or destroyed by the radiation — which could happen in a matter of days — the vessel would be disposed of and replaced, in a process Bussard regularly compared to changing a lightbulb.
Because the Riggatron, Bussard’s device, would expose its magnets directly to the full force of the neutron flux, delicate but powerful superconducting magnets couldn’t possibly be used; copper coils would have to serve instead. Even these would only last a few weeks before needing to be melted down and replaced — but Bussard hoped that the cheaper cost of construction would ultimately make his fusion reactor the more viable concern.
At the same time as plans were being drawn up for JET and TFTR — the large tokamaks with their multi-hundred-million-dollar budgets — Bussard was claiming that:
“It could produce commercial fusion power as much as 20 years sooner than its Main Line counterparts, saving billions in development dollars. A portable, flexible high energy neutron source, the Riggatron is capable of producing fusion power, fission power, ethanol for cars, oil from tar sands, and nuclear fuel. In addition to all that, it could furnish estimated profits that boggle the mind: One “high growth” model shows Riggatron-based fuel production outstripping Exxon by the year 2000.”
This is, of course, just so much sales talk — and, on paper, you can’t always tell whether a pitch like that comes from an international collaboration of respected plasma physicists and engineers or from some dude trying to sell you a third-hand tokamak that fell off the back of a lorry. Bussard was not the latter — he’d worked in industry and as an engineer for years, and held a PhD in plasma physics from Princeton, the institution leading the charge for fusion in the US at this time.
But Bussard became bitter towards “the establishment” after a $670,000 review of his proposed tokamak project came back with an extremely negative verdict. He felt that the panel that had reviewed his work was tremendously biased because they were all deeply invested in the fusion mainstream — the big tokamaks that were currently being constructed at the main plasma physics labs.
Bussard later said of the study, in an interview with Omni magazine:
“It was progress as far as we were concerned. For the first time we had money to explore the engineering parameters that bounded the physics requirements. As a result of that study, I was a hundred percent confident it would work. But the scientific community aligned itself against us. In the spring of 1978, a board was convened to evaluate Riggatron’s feasibility. The panel met and produced a report. The report was so asinine that my company, Inesco wrote a twenty-page rebuttal. The rebuttal got the panel to reconvene and consider it again. They came out with essentially the same kind of idiotic statements. Then they went downtown to higher levels of DOE and made presentations condemning the Riggatron concept.”
The study naturally cast an awful lot of doubt on the idea of exposing materials directly to the heat and neutron flux from fusion reactions. They suggested that the first wall of the reactor would melt, and were also sceptical that Bussard could do away with the more complicated auxiliary heating methods to get the plasma to fusion temperatures — his device just relied on blasting current through the plasma, which had previously seemed not to work. Bussard claimed in Omni that their concerns were based on a lack of expertise:
“That was nonsense. Total nonsense. It was said by people who have no experience in building heat-transfer systems that conduct high-heat flows. The kindest thing that you can say about those on the first panel is that they were woefully ignorant of the engineering technology of high-power machinery. That is the kindest thing.”
When describing the DOE’s funding apparatus, he sounds downright conspiratorial:
When asked whether there really was a fusion “establishment” in the nation’s capital, one that excluded all but the Main Line magnetic-fusion programs from getting funded, Bussard replied:
“If you ask people in the government, they would categorically deny it, obviously, because to admit it would be to agree that they are perpetrators of evil. So they say, “Oh, of course not. Anyone with a good idea is welcome to come, and we will be glad to support them.” In reality things are quite complicated. Everything is funded by the Department of Energy establishment through Germantown, Maryland, which is the old AEC (Atomic Energy Commission). Combine this with a small number of people who wander around the country, contending that they should be given government research money to fund new and novel research solutions for fusion. In fact, one can show by known physics that most of these solutions don’t work, which is why the DOE in its reasonable wisdom has chosen not to fund them. But we did not invent a new magical confinement scheme. All we did was to take the world standard confinement mechanism, the Tokamak, and shrink it in size by an engineering approach, not a physics approach. The physics is perfectly sound. We don’t want to fight unproven physics. We never did. But suppose we’re right. Suppose our machines do run as all nature tells me they will. By 1984, we will have five machines that run at power outputs of two hundred million watts for a couple of seconds. Our first commercial plant will be running in 1987, twenty years sooner and at one fortieth the cost of the Main Line program. Now who in the national program can be enthusiastic about that? The Riggatron’s swift development could, in their view, put careers on the line. Long-term personal futures are involved in this program. The bureaucracy in Washington, which has planned the main national program, is now faced with a curiosity of a twenty-year-sooner solution that it didn’t invent. That doesn’t make people feel good. It’s human nature.”
And this could well have been the end of the story. The scientific and technological community is littered with tales like this: the establishment is against me, they can’t face the reality of my revolutionary technology and my brilliant ideas, they’re hostile to anything that challenges their position, they’ve closed up shop to innovation. Sometimes, these bitter outsiders have a point and were treated unfairly. More often than not, this is the refrain of the crank — I’ve seen it from people who claim to have discovered that all of physics since Newton’s Laws is wrong because they misunderstood the phrase “a wheel does no work” in high school, and have churned out endless books about how they’re a thousand times smarter than those stupid scientists. Rejection can be difficult to handle for people with big egos. And, on the other hand, it’s certainly true that there was always a degree of favouritism in choosing which designs would get the grant money — although the proliferation of magnetic fusion devices that did get built shows that the field wasn’t entirely captured by a single design.
The reality is that you don’t need a vast anti-Bussard conspiracy to explain why his project didn’t get funding. The two sides don’t even need to disagree that much on the science and engineering of the project. Bussard might think that the Riggatron had a 10% chance of succeeding, and that it was consequently worth a shot; the government funding agency sees that same figure as a 90% chance of failure, and so invests in the slower, steadier, more likely route to nuclear fusion. They can share exactly the same assessment of the scientific viability of the Riggatron and still come to different conclusions about whether it should be funded.
Yet at this stage, it looked like Bussard was just going to be another bitter figure on the fringes of the scientific community. When your dream is to develop an energy source that you believe will utterly change the world, and you really believe you have the capacity to do it, it’s perhaps no surprise that you will feel this way, even if your beliefs are totally unfounded. Bussard was far more knowledgeable about fusion and reactor design than me: he probably had every reason to suspect that his design was superior. But evidently he was unable to persuade enough people of this fact, and he was out in the cold.
That was at least until he met a guy called Bob Guccione — a multi-millionaire who’s probably best known for how he made his money. Bob Guccione founded Penthouse magazine, which is, um… not actually about apartment design. Wikipedia helpfully describes it as “aimed at competing with Hugh Hefner’s Playboy magazine, but with more extreme erotic content.” By the 1970s, Guccione had earned millions of dollars by essentially selling pornography with occasional side-orders of journalism.
He was also already well-known by this point for making rather extravagant investments with that money. He had invested $45m in a hotel in the Former Yugoslavia which went bankrupt after just a year. And he had invested $15m of his own money in the film Caligula, starring Malcolm McDowell, which is a truly infamous movie. It was banned in numerous countries on release due to its intense sexual and violent content, and was widely panned as one of the worst movies ever made. Rotten Tomatoes described it as “Endlessly perverse and indulgent, Caligula throws in hardcore sex every time the plot threatens to get interesting.” Helen Mirren, who starred in the film, disagreed — describing it as “an irresistible mix of art and genitals”. Quite. Naturally, in later decades, the immense controversy surrounding this film led to it being reassessed, and it now enjoys a rather dubious cult classic status in certain circles.
Getting into the life of Bob Guccione is really a topic for a very different podcast — and probably a different host — than this one, but hopefully this gives you a kind of idea: imagine a more eccentric, potentially more unstable, iconoclastic version of Hugh Hefner.
A good anti-establishment figure to talk to if you want to get your nuclear fusion reactor design funded. And this is exactly what happened. In fact, it all started when Guccione read precisely the same interview in the “Omni” magazine which we quoted from earlier — it just so happened to be one of the publications that Guccione owned. Luckily for Bussard, Guccione wasn’t just interested in the fusion of genitals.
Guccione had long been a fusion true believer, saying in an interview that he had long since concluded that fusion was “the only way to go” to solve man’s energy problems, and, like many others in the Cold War, viewed scientific progress as a geopolitical race. “Who creates the first fusion reactor literally controls the world’s energy supply,” Guccione said in an interview, “and if it wasn’t this country, who was it going to be? Russia? Communist China? Imagine having a unique patent on the telephone system and the electric light system combined, because the whole world uses it, especially Third World countries,” he said. “It would totally transform the world.”
So reading this story of a spurned, maverick genius fusion scientist in one of his own magazines was enough for Guccione to invite Bussard to dinner to discuss his ideas.
I’ll quote from the excellent book “Fusion, The Search for Endless Energy” by Robin Herman, describing what happened next.
“Over the meal, Bussard described the difficulties he was having obtaining government funding. After all, the Department of Energy had committed $314 million to the tokamak at Princeton and $100 million to the mirror machine at Livermore. Guccione urged Bussard to seek industrial and business investors, but Bussard had already exhausted that route. The trouble was, Bussard told Guccione, he could not stir any enthusiasm because he could not say definitively when a marketable product would be ready. Bussard believed he could produce a commercially viable mini-reactor in ten years, but there would be no guarantee. Bussard was unaware that his host was already a true believer in the possibilities of nuclear fusion. Bob Guccione decided to finance the mini-tokamak project, hoping that his personal weight might serve as the gravity to draw in other investors.
He knew that the mainstream scientific community expected cracking fusion to take many decades and hundreds of millions of dollars. We know today that, if anything, even these estimates were optimistic.
But, motivated by the same starry-eyed idealism that we’ve seen throughout the fusion odyssey, they felt that if it worked, the consequences would be nothing short of spectacular. In March 1980, Guccione formed a partnership with Bussard and turned over, as he recalled later, some $400,000 in startup funds. Engineers, computer programmers, and metallurgists were hired, and Inesco set up a new shop in La Jolla, California, with eighty-five employees.
Over the next four years, as design work progressed and the search for investors continued, Guccione poured in $16 million or $17 million, by his accounting. Predictably, the Inesco scientists who attended international meetings endured considerable ribbing about working for one of the most successful purveyors of adult magazines in the world. Physicists and pinups seemed so hilariously incongruous. But the Inesco team knew Guccione was a serious investor and a sincere proponent of fusion.
That’s what really mattered. Guccione saw the incongruity, too, and he was not without a sense of humor about it all. To oversee Penthouse’s interest in Inesco, the publisher created a subsidiary, which he dubbed Penthouse Energy and Technology Systems, thus creating the acronym PETS. It was a conscious reference to Penthouse’s nude centerfold, “Pet of the Month.””
Despite Guccione’s whole-hearted support for the project, though, it was doomed to failure. $17m was a lot of money, but it was not enough to build a prototype of the Riggatron. The project had relied on their ability to persuade other, like-minded people to invest along the way, and this didn’t pan out. Guccione leveraged all of his business contacts and, um, enthusiasm for the project; he drew attention to it as an investment opportunity in flashy ways.
Even in the Omni interview, it seems clear that he was hoping for another chance to get a mainstream government contract by demonstrating that his design was superior with a prototype — and Bussard even suggested that there was a danger that the Russians would construct a Riggatron before the US did.
Perhaps it was the scientific concerns that other physicists raised about the Riggatron; perhaps it was the bizarre, incongruous nature of the thing, which looked like the pet project of a spurned physicist and a pornography mogul.
Ultimately, the Riggatron failed to attract this additional investment. Bussard and Guccione became increasingly conspiratorial, suggesting that they were being undermined by lobbying from nuclear fission and other investors. Per Robin Herman:
“In 1984, an attempt to take Inesco public flopped after the underwriter failed to sell the last 400,000 shares. Bussard’s dream and Guccione’s gamble were crushed.”
Shortly after this, Guccione stopped funding the project altogether — and while he remained avidly interested in fusion projects until his death in 2010, this was his last major financial gamble on the subject. The Riggatron was never constructed.
Later developments in the tokamaks at JET and TFTR showed that the Riggatron concept could never have worked anyway; it would likely have been ruined by the same instabilities that, in the 1970s, were only just being discovered. There are clear scientific and engineering motivations to produce smaller, more compact fusion reactors — we discussed a lot of them in our buzzkill episodes, including the worry that these huge international collaborative tokamaks may just be leading to an extremely expensive and complicated way of producing energy that can never be commercially viable — but the Riggatron was not the appropriate solution. Its copper-coil magnets would have been incapable of providing magnetic fields sufficient for fusion to work, and Bussard’s ideas of disposing of large amounts of nuclear waste by feeding it back into the plasma were pretty fantastical, given what impurities do to plasma confinement.
Ultimately, even its most die-hard believer, Bussard, abandoned the idea of ever constructing a Riggatron: he knew that the project was far too speculative and unlikely to ever gain the millions of dollars of funding required.
But this did not stop Bussard from pursuing fusion altogether, and he would spend the next thirty years looking into inertial electrostatic confinement fusion: devices that accelerate charged particles into each other at phenomenal speeds, attempting to cause them to fuse together. This was closer in spirit to inertial confinement fusion than to his previous dreams of a small, modular tokamak, and resulted in a device called the Polywell, which still has advocates today. Bussard was still pitching ideas about fusion and the success of his devices until his death in 2007, eternally convinced that humanity was on the brink of making this thing a success, if they’d only listen to him. We’ll likely come back to the idea of electrostatic confinement fusion when we talk more about fusion startups and, well, “alternative” attempts at achieving this dream.
For now, though, we’ll leave the spurned physicist and the pornography baron in their rightful small, but instructive place in the annals of fusion history: starry-eyed, and dreaming of funding the invention that would guarantee the human race a glorious future — one centrefold at a time. At least until they adapt it into a Hollywood movie.
Nuclear Fusion: From JET to ITER
Hello, and welcome to the latest in our nuclear fusion marathon. Last episode, we told you about the gradual squeeze on fusion budgets and funding in the 1970s and 1980s. Alongside this was a general drive to make fusion more practical. Scientists wanted to bridge the gap between experiments designed to pin down fundamental plasma physics quantities and experiments that might teach them something about how to make nuclear fusion a viable power source. This led to an influx of engineers, a serious consideration of some of the practical problems that needed to be solved to harness the power that might be generated by a fusion reactor, and a general move towards a goal of breakeven. To demonstrate to the general public and to funders that progress was being made, nuclear fusion was going to have to prove that it had the potential to become that clean(ish), cheap(ish), limitless(ish) power source that had been promised for so long.
So the goal of a fusion “generator” that would “generate” as much power as was required to run it was adopted. This naturally squeezed out most methods of magnetic confinement fusion apart from the tokamak, which was the approach closest to that goal — and it was realised that a tokamak that could break even would be substantially bigger and more complex than anything that had previously been constructed. The result? Smaller projects like magnetic mirrors and stellarators lost funding altogether, and global efforts focused on three big tokamaks: JET, TFTR, and JT-60.
At this point, it did seem like breakeven was on the horizon. You could look at a nice logarithmic graph of what’s called the Q value for a fusion reactor. A Q of 1 is breakeven; a Q of 2 means you generate twice the power you use, and so on. Qs were rising steadily by factors of ten every few years, in experiments in machines like the Princeton Large Torus and later TFTR and JET when they came online. Soon, tokamaks went from taking all that input power and generating enough fusion reactions to power a wristwatch to enough to power thousands of homes. It was still less energy than you put in, of course, but the rate of improvement — if extrapolated — made it seem as if breakeven was just another few experimental shots away.
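To keep the bookkeeping straight, here’s a minimal sketch of that Q calculation; the shot figures are the JET and ITER numbers quoted later in this episode, used purely as illustration:

```python
def fusion_gain(p_fusion_mw: float, p_heating_mw: float) -> float:
    """Q: fusion power released divided by external heating power supplied."""
    return p_fusion_mw / p_heating_mw

# Q = 1 is scientific breakeven; successive machine generations have
# tended to raise Q by roughly an order of magnitude.
print(fusion_gain(16.0, 24.0))   # JET's 1997 record shot: roughly 0.67
print(fusion_gain(500.0, 50.0))  # ITER's design goal: 10.0
```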
But as we discussed in last week’s episode, the timeline had to be stretched out a little bit. And then a lot. The first generation of machines designed to reach breakeven came online back in the 1980s, back when mobile phones were the size of houses and American Psycho looked like a documentary. Now, it’s 2018; mobile phones are the screens through which we view the world and… um, American Psycho is more or less a documentary.
This episode, we’re going to talk about JET — one of the few currently working fusion reactors in the world, one that I’ve visited on multiple occasions as an Oxford student, and the reactor that currently holds the world record for approaching breakeven.
Let’s zoom out and very quickly review what Europe has been up to in the fusion game since we left them after the ZETA failure at Harwell. In 1968, when that famous expedition to the Soviet Union to learn about tokamaks took place, the scientists involved were British — but the first European tokamak was the TFR in France. Construction started in 1970, and it began operation as early as 1973.
It was quite similar to the T3 tokamak that the Russians had demonstrated beforehand, but a few key parameters were boosted. The plasma current, which you’ll remember involved running an electrical current through the plasma to result in a compressive force from the magnetic field lines, was tripled relative to the T3. It had a thin copper shell — conductive metal helps to stabilise plasmas by allowing charges to move through the shell and counteract any magnetic fields that are established by deformities in the plasma — and it also had a weaker magnetic field that helped control plasma position. This device improved on the T3 in terms of how long it could confine the plasma, and in terms of temperature — but it also introduced the European scientists to the problem of “disruptions”, these sudden and violent instabilities. Eager listeners will remember that one disruption at JET caused the entire tokamak to jump into the air by a few centimetres. These occur when the plasma is polluted by impurities or when its density becomes too high, and this led to an idea that there might be a maximum density that could be allowed in a tokamak before disruptions would occur. Because a denser plasma takes up less space — but also has more collisions and therefore more fusion reactions, producing more energy — the upper limit on density meant that future machines had to be larger.
Remember, magnetic confinement fusion scientists want to have a high triple product — high confinement time, high density, and high temperature. Now that the density had an upper limit, tweaks to the magnetic field attempted to improve confinement time beyond a few milliseconds. And temperature also proved difficult; beyond a certain point, apparently due to particle losses, additional heating seemed to have little effect. It was found that the temperature varied as the square root of the applied heating power in these early devices, which made attaining fusion temperatures difficult.
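As a rough illustration of why the triple product framing is useful, here’s a hypothetical sketch; the ignition threshold is the commonly quoted figure for deuterium-tritium fuel, and the machine numbers are invented in the spirit of an early-1970s device:

```python
def triple_product(density_per_m3: float, temperature_kev: float,
                   confinement_s: float) -> float:
    """Lawson triple product n * T * tau_E, in keV·s per cubic metre."""
    return density_per_m3 * temperature_kev * confinement_s

# Commonly quoted D-T ignition threshold: about 3e21 keV·s/m^3.
DT_THRESHOLD = 3e21

# Invented early-device numbers: modest density, sub-keV temperature,
# a few milliseconds of confinement -- orders of magnitude short.
early = triple_product(5e19, 0.3, 0.003)
print(early < DT_THRESHOLD)  # True
```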
As the first generation of European tokamaks was producing these results in the 1970s, the community quickly realised — as the Americans did — that it would be necessary to build a bigger tokamak.
A working group, known as the Enriques group, was created to propose the objectives of the tokamak to be built, its size, and a cost estimate. This tokamak would have a circular cross-section, operate with a plasma current of 3 MA, and have powerful auxiliary heating. The main objective was divided into four sub-objectives of study: (1) confinement; (2) plasma heating; (3) the impurities produced during plasma heating and the interaction with the walls; and (4) the behaviour of α particles once the required conditions had been reached: their confinement and, if possible, the plasma heating they provide. All this required a fusion reaction power high enough to be measurable in the presence of the additional heating power. It must nevertheless be pointed out that, at the time, the confinement degradation induced by heating power was unknown, so this objective did not seem to raise major difficulties. In effect, the device would be intermediate between a laboratory tokamak working on plasma properties and an experimental reactor operating with tritium.
The main difference between JET and the big American tokamaks is in the cross-section that was chosen for JET — a D-shape, rather than the circular cross-section preferred by TFTR. Funding came more or less exclusively from what was then the European Economic Community (now the EU), although the Culham site in Oxfordshire in the UK was chosen as the site for the reactor. Naturally Brexit has complicated this whole thing — particularly because, for some inexplicable reason, the politicians chose to also withdraw us from Euratom, which is the European community for regulating atomic power and nuclear energy, as well as managing agreements about nuclear research. There were genuine political fights about this — did anyone really mind if we signed up to international rules about how to deal with radioactive substances? — but apparently every person who voted to Leave was terribly keen for us to spend millions of pounds coming up with our own regulatory bodies to deal with nuclear energy and research issues, because if there’s one thing Brexit was about, it was sovereignty for atoms. But I digress. All this left the future of JET in some degree of turmoil — though given that JET is the largest Western tokamak currently operating, and that ITER will theoretically be finished in a few years, the most likely outcome is simply that ITER coming online will be the final nail in JET’s coffin.
Unusually for a tokamak project, the wags might say, JET produced its first plasma in 1983 — actually on time and within the budget allocated in 1977. Fears that the vacuum vessel would be destroyed by pressure from the plasma turned out to be unfounded, and indeed it still operates with the same vessel today.
The subsequent story of JET is much like the life of any other experimental tokamak: the experimenters vary things like the plasma current, the cross-section of the plasma, the methods of heating used (remember, they’re injecting beams of neutral atoms to transfer kinetic energy to the plasma, and also heating it up by accelerating it with electromagnetic fields) — that kind of thing. The aim is to see how well, and for how long, you can control the plasma parameters — and determine these empirical scaling laws. For example, early on in JET’s operations it was confirmed that confinement times tended to depend most strongly on the plasma current — which makes sense, as it’s this that actually constricts the plasma to its magnetic axis.
Typically, when JET is operated, the scientists involved have a specific plan for the pulse. The “pulse” of plasma only lasts for around 40 seconds — and you can watch videos of the plasma flicker and glow. For those few brief seconds, the insides of the JET tokamak are the hottest place in the solar system — far hotter than the heart of the Sun, which can rely on gravitational pressure to make fusion much easier than with magnetic confinement.
The pulse might be aiming for a new record in confinement time, or plasma temperature, or plasma density. Much like TFTR in the US, the tokamak has achieved the confinement time (over 1.5s), the temperature (100 million kelvin), and the density (10²⁰ particles per cubic metre) that have been calculated to obtain breakeven and even to harness net energy from fusion. The only problem is that it has not been able to achieve all three at the same time.
Heating the plasma now requires three different processes. The current heats the plasma in the same way that passing a current through the thin filament of a lightbulb produces heat; but because the plasma’s resistance decreases as it heats up, this can only heat the plasma to a certain limit. For the rest, the neutral atoms injected at speed and the radio-waves that accelerate the plasma with electromagnetic fields are needed.
Some notable physics milestones for JET were its first fusion using deuterium-tritium plasma in 1991, and its record for the maximum power produced in any fusion shot, in 1997.
This is worth remembering, by the way, whenever anyone tells you that fusion is around the corner, or that the latest startup will be producing fusion power commercially in the next twenty years. In many ways, this has become a long-haul game. That record is 16 megawatts of power, which is approximately the same as eight regular wind turbines — finally, measurable on the scale of other power stations. It’s a considerable amount of power.
But here come all the buzzkill caveats. It released this record amount of fusion power for less than a second, of course, because achieving the correct combination of parameters meant that JET couldn’t simultaneously achieve its record confinement time. And, of course, it still didn’t quite reach breakeven. The power used for heating the plasma in the famous 1997 shot was approximately 24MW. Remember, Q is the ratio of fusion power out to heating power in, and so Q = 1 is breakeven — ITER aims for Q = 10 as a demonstration power plant, and an actual power plant would need an even higher Q. So by this calculation, JET’s record shot for fusion power achieved a Q of two-thirds for less than a second.
And if you want to be a real buzzkill, of course, you just need to point out the obvious. The way this breakeven is defined is not what you, or I, or most importantly the energy companies, would call breakeven. Here, breakeven is defined as generating as much fusion power as the power used to heat the plasma. But that’s only a small fraction of the overall energy used by JET — in operating the huge confinement magnets, in creating the plasma in the first place, and so on. In fact, when JET is switched on, it briefly uses approximately 1–2% of the electricity used by the entire UK: some 600 to 700MW of power for all of the various uses in the tokamak. The power drain from JET is high enough that it can’t draw all of its power from the national grid without causing blackouts in Oxfordshire — instead, they use flywheels, actual rotating heavy wheels that are spun up over time and then spin down to generate brief bursts of high-power electricity.
But, of course, to an accountant — or anyone who’s practically interested in turning fusion into a real power source that can displace fossil fuels — this is a long way from breakeven. In fact, it means that JET — for a brief second — generated less than 2–3% of the energy that was required, in total, to power the tokamak. And, of course, it’s worth adding onto that that none of the energy produced by JET — in the form of heat — was harnessed. We don’t really know, in a working fusion reactor, how much of the heat energy generated will be possible to harness — because one hasn’t been built yet. But we do know that fossil fuel power plants, which have been generating power and being ruthlessly optimised by engineers and capitalists for over a century now, can convert around 30–40% of the heat generated by burning fossil fuels into electricity. In all likelihood, a fusion reactor will be less efficient, at least at first. Which, as far as the accountants are concerned, means that it needs to be a hundred times better before it can even begin to talk about “breaking even” in terms of energy produced. Now, if you read the ITER website, they point out that ITER will use superconducting magnets which require less power — very true, they estimate it will get you down to 200–300MW — but it seems likely that even if ITER does achieve its goal of generating 500MW of fusion power from 50MW of heating… the physicists will say Q = 10, but the accountants will say it just broke even.
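The accountant’s complaint can be made concrete with a back-of-the-envelope sketch; the JET figures are those quoted in this episode, while the 35% heat-to-electricity conversion is an assumed, fossil-plant-like efficiency:

```python
# Physics Q: fusion power out over plasma-heating power in.
p_fusion_mw = 16.0    # JET's 1997 record fusion power
p_heating_mw = 24.0   # heating power during that shot
q_physics = p_fusion_mw / p_heating_mw

# Accountant's "Q": saleable electricity over the total site power draw.
p_site_mw = 700.0     # approximate total draw while JET runs a shot
conversion = 0.35     # assumed heat-to-electricity conversion efficiency
q_engineering = (p_fusion_mw * conversion) / p_site_mw

print(q_physics)      # about 0.67
print(q_engineering)  # well under one percent
```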
Each new generation of tokamaks does tend to improve things by an order of magnitude like that — but now that each new generation of tokamaks is taking forty years and billions of dollars to build, it’s a little less feasible to wave away these arguments.
Nevertheless, JET does hold the record for fusion power generated and the number of fusions occurring in a pulse, and given how many clever people and organisations have taken a shot at that record, it’s something to be proud of. It doesn’t hold every record, though: the highest fusion triple product was achieved by the other one of the big three, the JT-60 in Japan, while the longest-sustained plasma (6 minutes and 30 seconds) belongs to the Tore Supra tokamak in France. At the time of writing, pending some unconfirmed results in China, these are the magnetic confinement records — and they were all set in tokamaks.
As well as generally advancing physicists’ understanding of scaling laws, JET revealed new physics. It’s probably become clear by now that each successive generation of fusion experiments reveals another problem, another barrier, another difficulty which requires the next generation of experiments to correct. We’ve talked about previous issues with confining the plasma, heating it to fusion temperatures, and leaky magnetic bottles.
At this point, the key phenomenon that was holding JET’s plasma performance back was turbulence. So it’ll serve us to briefly describe — what is turbulence?
In fluid dynamics, a very simple way of categorising the flow of a fluid is the Reynolds number: a dimensionless ratio of quantities that tells you, broadly speaking, how the fluid is behaving. At very low Reynolds number, we have laminar flow: the fluid moves in parallel layers, with no disruption between them. Imagine water in a gentle stream, or viscous honey being spooned around the place. At high Reynolds number, though, the flow becomes turbulent. Instead of motion that is simple to categorise and predict, with layers sliding past each other, we have chaotic motion: individual particles spraying in all directions, whirling around in complex little vortices and spirals. “Chaos” evokes the idea of disorder, but it also implies extreme unpredictability: tiny changes in the initial conditions of the flow can result in a completely different pattern.
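For the curious, the Reynolds number is just the ratio of inertial to viscous forces. A sketch with textbook-ballpark fluid properties — the ~2300 laminar-to-turbulent transition is the conventional value for pipe flow:

```python
def reynolds(density_kg_m3: float, speed_m_s: float,
             length_m: float, viscosity_pa_s: float) -> float:
    """Re = rho * v * L / mu: inertial forces over viscous forces."""
    return density_kg_m3 * speed_m_s * length_m / viscosity_pa_s

PIPE_TRANSITION = 2300  # conventional laminar/turbulent threshold for pipes

# Honey off a spoon: enormously viscous, so Re is tiny -> laminar.
re_honey = reynolds(1400.0, 0.05, 0.01, 10.0)
# Water through a 2 cm pipe at 2 m/s: Re is huge -> turbulent.
re_water = reynolds(1000.0, 2.0, 0.02, 1e-3)

print(re_honey < PIPE_TRANSITION, re_water > PIPE_TRANSITION)  # True True
```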
We see turbulent flows all the time in nature. The rushing rapids of a river over rocks, the flow from a hosepipe or tap turned too high. We can even see a classic example of laminar flow becoming turbulent — light a match or candle, or watch the next time you see someone smoking. The smoke initially streams from the flame in laminar flow, but quickly becomes turbulent, curling and swirling in various directions.
Turbulence is one of the most important remaining unsolved problems in physics. It’s an area where we can write down the governing equations, but still don’t know how to solve them in general. Instead, much work is done empirically, by observing turbulent systems and trying to deduce what we can about how they behave on average — while the particular behaviour of any given system remains mysterious.
And that’s just turbulence in an ordinary fluid, for ordinary hydrodynamics. Add in the complex, interacting electric and magnetic fields for magnetohydrodynamics, and you can see why the problem of turbulence in plasmas is so complex to solve. It can drastically increase the loss of particles and energy, making both confinement and heating processes far less efficient. In your idealised system, it might be impossible or unlikely for a particle to escape confinement or make it through the electromagnetic field lines in your tokamak; with turbulence, individual particles of the plasma can cross the field lines and escape.
Simple parameters like the timescales, length-scales, and energy scales for these tiny, turbulent vortices and eddies have long since been derived theoretically and measured experimentally: famously, Kolmogorov in the 1940s worked extensively on the problem of turbulence and the way it dissipates energy in fluid systems, predicting these timescales, length-scales, and energy scales. But being able to sketch the edges of what plasma turbulence might do and being able to predict how it will affect the plasma in your tokamak are very different propositions. Huge amounts of theoretical physics effort and computational simulation time have been devoted to trying to understand plasma turbulence in more detail. Studies will look for large-scale patterns in the turbulence, correlations between one region of the plasma and another, and how confinement times relate to the turbulent behaviour that’s observed.
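To give a flavour of the scales Kolmogorov’s theory predicts: the dissipation scales depend only on the fluid’s kinematic viscosity ν and the rate ε at which turbulent energy is dissipated. A sketch with water-like, invented numbers:

```python
def kolmogorov_scales(nu_m2_s: float, epsilon_m2_s3: float):
    """Kolmogorov length, time, and velocity scales.

    nu: kinematic viscosity (m^2/s); epsilon: dissipation rate (m^2/s^3).
    """
    eta = (nu_m2_s ** 3 / epsilon_m2_s3) ** 0.25  # smallest eddy size (m)
    tau = (nu_m2_s / epsilon_m2_s3) ** 0.5        # eddy turnover time (s)
    vel = (nu_m2_s * epsilon_m2_s3) ** 0.25       # eddy velocity scale (m/s)
    return eta, tau, vel

# Water-like viscosity and a vigorous (invented) stirring rate:
eta, tau, vel = kolmogorov_scales(1e-6, 1e-2)
print(eta, tau, vel)  # eddies ~0.1 mm across, turning over in ~10 ms
```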
One key discovery of this era of magnetic confinement fusion was something called the H-mode. It wasn’t actually discovered at JET — instead, it was found at a German tokamak known as ASDEX, a small tokamak only around 3–4m in diameter. Like other tokamaks, part of the heating of the plasma occurred via the technique of neutral beam injection: accelerating ions to high velocities, neutralising them by combining them with electrons, and then injecting them into the tokamak. When the atoms hit the plasma, they are ionised and become confined by the magnetic field, allowing them to transfer their energy to the plasma efficiently through collisions.
When the neutral beam heating on ASDEX was turned up past a certain threshold, there was a sudden — apparently mysterious and inexplicable — change in the plasma properties. The plasma suddenly became much better at holding on to its particles, energy, momentum, and (less helpfully) its impurities. The key aspect is that the turbulent motion in the plasma is dramatically suppressed, which roughly doubles the confinement time. This is why it’s referred to as the H-mode — the H stands for high-confinement, while all previous plasma states are referred to as L-mode, low confinement. The ITER website notes that, if the H-mode had not been discovered, it might have been necessary to build a tokamak twice as big to dream of achieving breakeven — doubling all of the issues we’ve discussed with making fusion practical.
While tales of the H-mode were initially met with scepticism, it has now been a focus of intense study for more than three decades, and the design of most tokamaks and stellarators now aims to move operations into this regime of plasma behaviour. The precise reasons it behaves as it does are still somewhat mysterious. In the traditional understanding, turbulence is driven by gradients in pressure, temperature, or number density — and this matches what we understand of ordinary fluid turbulence, too; after all, the turbulence when a tap or hose is turned up too high arises from large pressure gradients. In H-mode, these edge gradients actually steepen, yet turbulence is suppressed. One experimental observation that appears to help explain this is sheared poloidal flow at the edges of the plasma. In other words, while the bulk of the plasma flows around the axis of the donut-shaped tokamak, at the edges in the H-mode it also circulates at right angles to that, the short way around the donut’s cross-section.
Some plasma physicists view this as a kind of self-correcting phenomenon within the plasma; its fluctuations and instabilities become so strong when the neutral beam heating is increased that, via electromagnetic induction, they’re almost inducing a flow at the edges which serves to cancel them out. Theories along these lines have led to the more modern idea that, rather than trying to cancel out every possible plasma instability, perhaps by creating an incredibly fine-tuned magnetic field — which may not be possible with any reasonably sized apparatus — you’re better off attempting to “work with” the instabilities and relying on self-correcting effects like this to take care of them for you.
Yet this strategy may not be successful — because just as H-modes suppress one brand of instability, another is introduced. So-called “Edge-Localised Modes”, or ELMs, arrive in the H-mode. Because there are sharp gradients of density and temperature towards the edges of the plasma in H-mode, you get occasional, violent, periodic expulsions of particles and heat from the sides of the plasma, which smash into the tokamak walls and can cause a great deal of damage to the equipment. Because they are so rapid and involve such sudden bursts of energy, not only do they reduce the efficiency of the device, but they can also damage the first wall and the divertor with high levels of heat and particle flux into the edges of the tokamak.
Together with the disruptions that still plague tokamaks, finding some way to understand and tame these edge-localised modes is still a major concern in fusion engineering, design, and in theoretical and experimental plasma physics. There are mechanisms that allow ELMs to be completely suppressed in tokamaks, but they have their own disadvantages: without ELMs periodically flushing the edge of the plasma, impurities build up too quickly, leading to disruptions. Often it can feel like the story of trying to get this strange, mysterious, and complex phase of matter — plasma — to behave in an appropriate way is like juggling. Tweaking this parameter gives you better confinement times, at the expense of temperature. Taming this instability tweaks another into being. You can see, given the history, the complexity and intricacy of tokamak design and plasma physics — and the fact that we don’t fully understand the physics here — why dreamers might think that here, on the verge of breakeven, we just need to stumble upon the right plasma mode, the right magnetic field configuration, the right serendipitous discovery, and we’ll have cracked nuclear fusion. You can also see why cynics might suggest that new problems will continue to spring up with each apparent development, and prevent fusion from ever being commercially viable — even if ITER achieves a power gain.
Pulses at JET aren’t just for testing this kind of new physics, though. A pulse at JET might be trying to test out a new piece of equipment. Recently, as JET has become more and more focused on being a testing ground for the next tokamak, ITER, this has been more common than attempting to reach new peaks in plasma performance. For example, we’ve talked in previous episodes about some of the issues with practical fusion reactors: materials need to stand up to bombardment from incredibly hot neutrons that are difficult to stop, deal with radiation from accelerating and decelerating plasmas, survive disruptions, and protect the delicate electromagnets that actually confine the plasma. JET is the only facility that can produce conditions even remotely resembling what ITER will be like, particularly when it comes to the hot neutrons. For that reason, in recent years, they’ve tested a tungsten wall — tungsten has the highest melting point of any metal — which aims to limit the plasma impurities that arose in previous tokamaks as the plasma became contaminated by melting the walls of the reactor, which can lead to disruptions.
We also discussed how valuable tritium is as a fuel for nuclear fusion in deuterium-tritium reactions; and that, in fact, for fusion reactions to be remotely economically viable and sustainable, you need to be able to recover as much of the tritium from the interior of the wall as possible. For this reason, they’ve used walls that contain beryllium. Beryllium is an element most of us don’t have much cause to deal with — although it’s surprisingly low on the periodic table. Four protons in a nucleus gives you beryllium. Hydrogen, helium, lithium, beryllium, and so on. 4 out of every million atoms in the Earth’s crust are beryllium, which gives you an idea of its rarity — for that reason, it can cost between $5,000 and $10,000 for a kilogram of the stuff. And yet its unusual properties make it one of the leading candidates for use in ITER. It doesn’t absorb tritium, and stands up to the punishment from those hot neutrons better than carbon-fibre based materials, which quickly become radioactive and have to be replaced. Recent plasma pulses have focused on testing the performance of these beryllium and tungsten materials — but also the distribution of the materials. Beryllium’s melting point is around 1,560 K, far lower than tungsten’s roughly 3,700 K. Beryllium is therefore most suitable for regions where there are limited interactions between the wall and the plasma — the tungsten layer is for the divertor, where leaking hot particles of plasma actually come into contact with the wall of the reactor, which dissipates heat by transferring it to a coolant. This divertor configuration is very important for sustaining heat inside a plasma — it acts like a mass spectrometer, separating particles according to their masses, and removing the heavier elements and impurities that cool the plasma and dissipate the energy that’s needed for fusion.
And the divertor could be another big problem with a fusion reactor — because it’s all very well creating something that won’t melt for the seconds of operation under JET, or the minutes of operation under ITER, but if a fusion reactor is really going to operate for hours or days at a time, producing a steady baseload of power, then you’ll need a substantially stronger material than we can create now — or possibly a different technique, such as allowing the plasma to collide with layers of neon gas before it hits the divertor. If you don’t have a divertor, though, these loose particles of plasma will erode the walls of the reactor, resulting in plasma impurities that will cool the plasma below fusion temperatures or cause disruptions. Part of what JET showed is that a configuration with the divertor at the bottom of the tokamak, towards the bottom edge of the D-shaped plasma cross-section, is likely the best place to channel the leaky plasma. Indeed, it was adding the divertor that allowed them to reach the record fusion energy production we discussed earlier.
Studies with this ITER-like wall aim to determine how much tritium fuel is retained by the beryllium tiles, and precisely how adding these new materials into the reactor changes plasma impurities. It seems likely that — until at least 2025, when first plasma is due at ITER, and maybe even 2035, when deuterium-tritium experiments are due to begin in earnest — JET and the results from JET will remain crucial to our understanding of how tokamaks work, and hence crucial to the possibility that ITER, and magnetic confinement fusion more generally, will be a success.
Okay! This JET episode was pretty mammoth, but it’s worth spending time on a tokamak that is so near and dear to my heart. We actually have a bonus episode coming up based on materials from Culham which will describe a JET pulse. But, as our fusion story moves closer to the modern day, we’re going to leave magnetic confinement fusion for a few episodes.
We’ll discuss what happened to the major inertial confinement fusion experiments using lasers, including the huge National Ignition Facility in the US. We’ll discuss the international pact that led to the creation of ITER. We’ll talk about some of the dark horse startups that are trying to make nuclear fusion a reality. But first, we have to deal with one of the most infamous episodes in the history of fusion — and yes, I’m talking about Fleischmann and Pons.
Bonus episode: description of a JET Pulse
This description of how a pulse run is actually done at JET comes from Culham’s website at CCFE, and I thought it was a pretty good summary of how the fusion reactor actually works at present — and the various complications that can be involved. I’ve preserved most of it as it was originally written, but added in a few extra details here and there. Enjoy!
Unlike future power stations, where the plasma will need to run for several hours or continuously, on JET each pulse typically lasts around 40 seconds. Although this may seem like a short amount of time, the 80,000 or so plasmas already created in the machine since 1983 have provided crucial information for plasma science, on its behaviour and how to improve its performance. The pulses have all contributed to an extensive knowledge bank of data (with over 100 terabytes collected to date) and their analysis plays a key part in the long-term goal of fusion electricity on the grid.
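As a sanity check on those figures, a quick bit of arithmetic shows what they imply per pulse. This is purely illustrative, combining the numbers quoted in this piece (the figure of up to 60 GB from a single modern pulse is mentioned later on):

```python
# Illustrative arithmetic from the figures quoted in the text:
# ~80,000 pulses since 1983, ~100 TB of archived data in total,
# and up to ~60 GB collected on a single modern pulse.
total_pulses = 80_000
archive_tb = 100

# Historical average data volume per pulse, converting TB to GB.
avg_gb_per_pulse = archive_tb * 1000 / total_pulses

modern_pulse_gb = 60
ratio = modern_pulse_gb / avg_gb_per_pulse

print(f"historical average: {avg_gb_per_pulse:.2f} GB per pulse")
print(f"a modern 60 GB pulse is ~{ratio:.0f}x that average")
```

The gap between the 1.25 GB historical average and a modern pulse reflects how much richer JET's diagnostic instrumentation has become over the decades.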
Like all good stories, the story of a pulse has a distinct beginning, middle and end, a strong setting, and involves many people — in this case engineers, scientists and computer experts — who all have fascinating parts to play.
The story begins…
Chronologically, the story begins at 07.00 in the JET Control Room. This is the operational centre of the JET facility where experiments are undertaken in two eight-hour shifts. In this story the first shift of the day is just starting and 15 pulses will be run in its duration.
Before today’s pulses can take place, scientists and engineers from fusion laboratories all over Europe have applied to carry out experiments on JET. Their proposals have been discussed by a Task Force and the direction of experimental campaigns identified. A steering committee has matched submitted proposals with the current scientific requirements of JET, and each experiment has had one lead Scientific Co-ordinator assigned to it.
Back in the Control Room the first pulse of the day, a ‘dry run’, is about to take place. This is the chance, early in the morning, to check that all the systems needed to operate JET are running smoothly and that a profitable day of experiments will follow. None of this can, of course, take place without the staff working in the control room during a given shift, and these include the Engineer in Charge, Session Leader, Shift Technician, Scientific Co-ordinator, and Physicist in Charge. All of their roles will become clear as the story unfolds.
Running the pulse
Now to the running of the first experimental pulse of the shift. Having checked the status of the machine following the ‘dry run’ pulse, the Session Leader pre-programmes the next pulse. The main responsibility of the Session Leader is to prepare a realistic experiment plan, which fulfils the wishes of the Scientific Co-ordinator and his/her team as far as is possible within the operational limits of the machine. Based on this plan, the Session Leader prepares the basic types of pulses which need to be run in a session. He/she programmes the details of these pulses — the time evolution of things like density, plasma current, magnetic field and plasma shape — into the so-called ‘pulse schedules’, which contain the information required to run a plasma pulse.
During the shift the Session Leader, starting from the prepared pulses, adapts the pulse schedules in response to the results of previous experiments, and in discussion with the Scientific Co-ordinator. When the Session Leader is happy with the pulse schedule for the next pulse, he/she transmits this schedule to the Engineer in Charge (EIC), who checks that the pulse is safe to run. The Session Leader also communicates with the Physicist in Charge, who is responsible for setting up various diagnostic systems with the Heating System operators.
Before the pulse can begin the power supplies need to be enabled. The power to make a pulse comes partly from electricity directly from the grid and additionally from stored energy in two massive flywheel generators on the Culham site — with roughly 50% coming from each source.
As the flywheels are enabled, in the Control Room, the parameters have been decided and set by the Session Leader and confirmed with the EIC. The EIC then ensures that the operators of the required subsystems, power supplies, computer systems, heating systems and essential diagnostics, are ready and asks the Shift Technician to start the countdown for the pulse. Once this happens, a computer controlled initialisation sequence begins.
Countdown to plasma
After two minutes the sequence is held. At this point the EIC checks that all the systems are functioning correctly. When satisfied the EIC asks the Shift Technician to trigger the pulse — they are the staff who actually ‘push the button’ to make the plasma happen.
The pulse is triggered at zero on the countdown which is marked by a siren noise — an announcement to all in the control room that a plasma will be created on JET. Forty seconds later, the plasma is created and can be seen on a dedicated screen. A number of checks are performed automatically during the pulse and if limits are exceeded the control systems will terminate the plasma by gently ramping the plasma current down.
The Session Leader and the EIC also watch the infra-red cameras closely and listen attentively to audio feedback from inside the machine. It is very unlikely, but if anything is judged not to be right, the Session Leader or EIC can push a button, which triggers a gentle rampdown in a similar way to the automatic stops.
In this story, as most commonly happens on JET, the pulse goes ahead as planned. During the first 40 seconds after the end of the countdown, the currents in the large magnetic coils surrounding the vacuum vessel are ramped up to create the required magnetic field inside the vessel. This field has to be just right to allow a plasma to be created inside the vacuum vessel. At 40 seconds a minute amount of gas (deuterium in routine experiments) is injected into the vessel and a strong electric field is induced which ionises the gas, making it into a plasma.
The plasma which has now been created is a very good conductor of electrical current. The electric field, which was initially used to create the plasma, now generates a strong current in the plasma. This current is ramped up in a controlled way to very high levels, typically 4–5 million amperes. The current in the plasma itself generates a strong magnetic field of its own — this adds to the magnetic field generated by the various magnetic coils to hold on to the very hot plasma, without it cooling down by touching the inside of the vacuum vessel.
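For a rough sense of scale, a back-of-envelope calculation (not from the source) shows the magnetic field that a multi-mega-ampere plasma current generates around itself. Treating the plasma as a long straight conductor, Ampère's law gives B = μ0·I / (2πa); the minor radius of about 1 metre used below is an assumed, approximate figure:

```python
import math

# Back-of-envelope estimate of the poloidal field at the plasma edge
# produced by the plasma current itself, using Ampere's law for a
# long straight conductor: B = mu0 * I / (2 * pi * a).
mu0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
I = 4e6                    # plasma current quoted above: 4-5 million amperes
a = 1.0                    # assumed minor radius in metres (illustrative)

B_pol = mu0 * I / (2 * math.pi * a)
print(f"poloidal field at the plasma edge ~ {B_pol:.2f} T")
```

The result, just under a tesla, is comparable in magnitude to the externally applied fields, which is why the plasma's own field contributes meaningfully to confinement.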
Once the plasma is well established, the detailed shape of the plasma is controlled using external poloidal coils. Only the edge of the plasma is visible on the screen in the control room, as only the edge radiates in the visible range of wavelengths.
After a short while a significant amount of current is put into a set of special divertor coils situated just below the plasma itself. The field generated by these coils is so strong that it is stronger than the field generated by the plasma itself in the vicinity of the coils. The so-called x-point occurs at a position where these two fields cancel out. Magnetic field lines just above the x-point will trace out doughnut-shaped surfaces which never get near the vacuum vessel.
Just a little further out from this surface, magnetic field lines no longer form closed doughnuts: they will in fact all finish by hitting the divertor in the bottom of the machine, which is designed to absorb most of the power. When the x-point is formed, the bottom of the machine — the divertor — becomes bright. Once the plasma is in this x-point configuration, additional power is applied.
This power, mainly from the neutral beam injection system, puts up to 35 million watts of power into the plasma, heating it up to ~150 million degrees Celsius. At this point the plasma surface, and in particular the divertor, becomes very bright. A strong ‘shaking’ of the image is also observed on the dedicated screen. This shaking is associated with phenomena called ELMs (Edge Localised Modes). These ELMs can be seen as small solar flares, as they expel significant bursts of energy at regular intervals, typically 10–50 times per second.
The phase with high power lasts from 5–20 seconds, after which the power is switched off. Then the plasma current and the magnetic field are slowly ramped down, and the plasma extinguishes when the plasma current approaches zero.
What happens next
Once the pulse is over, up to 60GB of data is collected. Some is reviewed immediately but the majority is stored for long-term analysis. The pink glow on the screen in this experiment lasted 40 seconds, the longest pulse ever run was one minute long.
The next pulse will be run in around 30 minutes. Before the next pulse ‘story’ can begin, the Session Leader will analyse the behaviour of the pulse to check that it did what was requested, and use this information to finalise the ‘pulse schedules’ for future experiments.
The length of the pulse on JET is limited by engineering design and cost considerations. Two factors mainly limit the duration. The magnetic coils, though cooled, heat up during a pulse, and when the temperature reaches a limit the current has to be ramped down to avoid cumulative heating over many pulses, which could cause damage to the coils. This duration was part of the original design, and much stronger cooling, or superconducting coils, would be required to extend the pulse length significantly. The second thing that limits the pulse length is the fact that the plasma current is maintained by a voltage induced in the plasma by varying the current of the main transformer coil.
The voltage can only be maintained as long as the current in this ‘primary’ coil varies. To get the longest possible pulse, the current in the primary coil is ramped up before the pulse to its maximum value. During the pulse, it is then ramped down at the rate required to give the desired plasma current. When the current in this primary coil reaches its maximum negative value, the plasma current can no longer be sustained.
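The flux-swing limit described above can be sketched numerically. The loop voltage that drives the plasma current equals the rate of change of magnetic flux through the primary, so the usable flux swing divided by the loop voltage bounds how long the current can be sustained. Both numbers below are assumptions chosen for illustration, not JET's actual design values:

```python
# Sketch of the transformer flux-swing limit on pulse length.
# The loop voltage is V = dPhi/dt, so with a total usable flux swing
# DeltaPhi the pulse can last at most roughly:
#   t_max ~ DeltaPhi / V_loop
delta_phi = 30.0   # assumed usable flux swing of the primary, in webers (illustrative)
v_loop = 1.0       # assumed loop voltage during the current flat-top, in volts (illustrative)

t_max = delta_phi / v_loop
print(f"maximum sustainable flat-top ~ {t_max:.0f} s")
```

This is why tokamaks driven purely by a transformer are inherently pulsed devices: a steady-state reactor needs some other way of driving the plasma current.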
In the design of JET the pulse length was fixed, in order to have 10–20 seconds at full power and field. In plasmas, most things vary on timescales of less than 1 second and hence after 10–20 seconds very little changes in the behaviour of the plasma. This means scientists can learn almost everything they need to from plasma pulses of 10–20 seconds duration.
Nuclear Fusion: Cold Comfort
The physicist and philosopher Thomas Kuhn described what he called the “essential tension” in the sciences and in scientific research. It’s what he called the tension in the sciences between tradition and innovation. Most scientific research builds on, expands, or refines the existing picture of the Universe that we have: it forms a part of other theories, using their results and their ideas to explain or explore some other aspect of reality.
Very often, this is how theories are developed: by a long series of different contributions from different individuals, conducting experiments, observing, and trying to fit their results into the existing frameworks, or using their results to expand those frameworks, applying them to solve new problems. You’re part of a scientific community, and a school of thought.
But, just on occasion, it’s necessary to totally discard the existing theories about what might be possible. Just on occasion, your experiment has produced results that cannot be explained by the existing theory. Every now and then, there is some innovative idea that flies in the face of basic principles of the previous theory. You think of Copernicus and Galileo; you think of Einstein’s theory of relativity; you think of the revolution in quantum mechanics that inspired so much of what Kuhn wrote. These are not mere tweaks around the edge of the old ways, but entirely new, first-principles re-imaginings of the Universe. Particles are no longer little billiard balls flying around through space, but “probability wavefunctions” that have no fixed position, momentum, or energy. Time is no longer a constant, ticking clock that moves at the same speed for everyone in the Universe: instead, it depends on who’s got the clock, and how quickly they’re moving relative to each other. These are not minor details.
And so, we have the essential tension: when to work within the frameworks and the theory that already exist, and when to throw the whole apparatus out the window because it no longer works.
In light of this essential tension, when someone comes up with a brand new theory, how does the scientific community respond? Does it stay reactionary, wedded to the old paradigm and the old way of viewing the world until some truly convincing evidence is presented? Or, does it embrace the new theory and the innovation, and explore its consequences? How can you tell if this new theory is the next quantum mechanics — or just the raving of a crackpot?
The sociologist of science Professor Harry Collins notes that one of the ways you can identify a fringe scientific belief is what he calls “pathological individualism”: people with fringe views reject virtually everything that the mainstream believes in — even if they have no alternative explanation and no valid criticism of what already exists.
The tales of these pioneers of science, these lone revolutionaries who are smart enough or lucky enough to revolutionise their individual fields — these are wonderful tales. It’s no wonder that they seduce some people into believing that they might just be one of those revolutionary figures. In the field of nuclear fusion in the 1980s, with the stakes so high, with multi-million dollar experiments trying and failing to produce net energy, it’s easy to imagine people wanting to overthrow the paradigm of ever-larger and ever more complex and expensive tokamaks and laser fusion devices to create something innovative, new, and revolutionary.
Science faces this debate endlessly, this endless tension. On one side, you will often hear people denounced as fraudsters, crackpots, attention-seekers: on the other side, you will hear the mainstream denounced as old fuddy-duddy reactionaries who are jealous of real genius. Sometimes, the people going against the mainstream are right. This was not one of those times.
On March 23rd, 1989, there was a press conference at the University of Utah. That’s the first remarkable thing: scientific breakthroughs are usually announced with big, splashy papers in peer-reviewed journals. Yes, there’s press attention and a press-release if the work is significant — but you rarely announce what you’ve found without some concrete data and details to show people.
In this press conference, two electrochemists from the University of Utah whom few people had heard of — Fleischmann and Pons — announced that they had achieved sustained nuclear fusion reactions. Their new method didn’t need a colossal tokamak filled with plasma and complex, twisting magnetic fields. It didn’t need lasers or beams of particles to heat that plasma up to millions of degrees. Instead, they announced, they could achieve fusion at room temperature, in a basement laboratory, using an apparatus that fit on top of a table and cost a few hundred dollars. All you needed was a battery, some water, and the crucial, secret ingredient: palladium metal.
They weren’t planning to announce too many details straight away — not until their device had been patented. After all, what Fleischmann and Pons were announcing may well be the new means of making power, cheaper and cleaner and safer than anything attempted before: they wanted to make sure they were going to reap some of the rewards. The Nobel Prize that they were sure to win for their discoveries was just the icing on the cake. But they did announce that their fusion reaction had produced “heat, neutrons, tritium, and helium: the expected by-products of fusion reactions.”
The test-tube apparatus that they’d used was under constant guard, as the scientists themselves appeared on magazines and TV shows, catapulted to fame by the discovery that would change the world. In their press conference, they were accompanied by the President of the University — Chase Peterson — who was not one to undersell the significance of what they had achieved. He said that the discovery ranked up there with “the discovery of fire, the cultivation of plants, and the uses of electricity.” No big deal, then.
Meanwhile, the scientific community responded. Some were immediately cynical — well, wouldn’t anyone be, if they’d spent decades working on the immense complexity of magnetic confinement fusion only to be pipped at the post by a couple of chemists who had rendered all of their work obsolete, using equipment that you might find in a high school lab? Besides, it contradicted all the consensus about fusion: how on Earth were the nuclei in Fleischmann and Pons’ experiment getting enough energy to overcome the Coulomb barrier, the electrostatic repulsion between the nuclei? We have described how this usually requires pressures and temperatures comparable to those you find at the heart of the Sun — the idea that you could circumvent that with some batteries and a piece of metal seemed far-fetched, to say the least. Equally, the fusion scientists knew all too well how easy it was for there to be false dawns — to see something that looked like a promising avenue to fusion, only to realise that it was far more complicated than you had initially hoped. Yet, without any details of the experiment, there was always a vague possibility that they had indeed discovered some completely new and rare effect. Fleischmann and Pons might not have been world-famous, but Fleischmann was a Fellow of the Royal Society — a high scientific honour for British scientists, usually given in recognition of important scientific work. Between them, Fleischmann and Pons had dozens of papers previously published in reputable scientific journals.
They weren’t obviously fringe crackpots, but well-established if not world-famous scientists: this made their work difficult to completely dismiss out of hand. Yet the fact that the scientists were hearing about this first on the evening news was unusual, to say the least.
At the same time, in other quarters, people were concerned about the military applications of the new technology. Would there now be wars for control over palladium resources? In our Buzzkill episodes, we talked about how fusion reactors and the hot neutrons might allow people to create fissile material for nuclear weapons from more innocuous isotopes. If anyone could use this technology to create nukes, then the delicate balance of the world that appeared to be arising in 1989 as the Soviet Union collapsed could be thrown into chaos, disarray, and war.
The situation was compounded the very next day, when a rival group of scientists headed by Steven Jones at Brigham Young University — also in Utah, just a few miles from Fleischmann and Pons — announced that they had been investigating this “cold fusion” for years, and had conducted a similar experiment that had shown signs of fusion by producing neutrons. Suddenly, it seemed as if Utah was the new capital of fusion research, and the state — and then federal — government began pouring millions of dollars into commercialising and exploiting this new discovery.
Of course, we now think of cold fusion as one of the most infamous and embarrassing episodes in the history of science — an enormous amount of hype over scientific results that were, at best, a misapprehension, and at worst, fraud. Fleischmann and Pons’ experiment wasn’t producing fusion at all: it was precisely as impossible as all the nay-sayers in the mainstream had said it would be. But, for its impact on the history of fusion science, and as a fascinating example of when science goes wrong, it’s worth going into the story more deeply. How did this happen? How did it unravel? How were so many people taken in by the false results, and how can we avoid similar fiascos from happening again?
To do that, it’s worth looking into what Fleischmann and Pons were claiming in more detail and why they couldn’t be dismissed completely out of hand.
Palladium, the secret ingredient for their cold fusion experiment, is an interesting element. You might have some in the catalytic converter in your car’s exhaust pipe. It is an excellent catalyst for chemical reactions owing to its unique crystal structure: it brings different atoms closer together, allowing for chemical reactions to take place more quickly.
As early as the 1920s, shortly after Rutherford had discovered the nucleus and people had begun to theorise that helium was made up of four hydrogen atoms squished together, scientists were trying to use palladium to help build hydrogen into helium. Palladium is very good at absorbing hydrogen — it can absorb approximately 900 times its own volume of the gas, soaking it up like a sponge soaks up water, with the hydrogen atoms sitting in the gaps in the lattice structure. This property has some people interested in palladium for use in hydrogen fuel cells: one of the problems with using hydrogen as a fuel is that its energy density by volume is very low, so you require great big tanks of the stuff to get anywhere. Two experimentalists, Paneth and Peters, attempted in the 1920s to cram as much hydrogen as they could into a palladium sample, in the hope that it would force the atoms close enough together to react and produce helium.
Sure enough, they detected minuscule quantities of helium — but not enough, at that point, to be practically useful in the airships of the day. A few years later, another scientist — Tandberg — had the bright idea of adding electrolysis into the mix. Electrolysis is used today, all the time, to produce hydrogen from water: you pass an electrical current through water, which breaks the molecular bonds between hydrogen and oxygen. The hydrogen atoms lose their electrons, becoming positively charged ions, while the oxygen atoms gain electrons and become negatively charged. They’re then attracted to opposite ends of the circuit — the positively charged hydrogen ions to the negatively charged cathode, and the negatively charged oxygen ions to the positively charged anode. Tandberg thought that perhaps using these electrostatic forces, which would attract lots of hydrogen towards the cathode, might result in enough pressure and density to produce a significant quantity of helium. He even tried to patent his device, but was told that his description was far too vague: no one could possibly be able to reproduce the results. As nuclear science progressed further, and it became clear to everyone that you’d need deuterium to fuse into helium, Tandberg repeated his experiments with heavy water. His aim here was nothing less than producing energy from fusion: he even warned colleagues that, if his calculations were correct, the machine might explode with considerable force.
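The rate at which electrolysis turns current into hydrogen gas follows Faraday's law of electrolysis: two electrons are consumed per H2 molecule produced at the cathode. As a hedged illustration (the cell current and run time below are invented, not taken from any of the experiments described here), the yield can be estimated like this:

```python
# Faraday's law of electrolysis: moles of H2 = Q / (2 * F),
# since each H2 molecule requires two electrons at the cathode.
# Current and duration are arbitrary illustrative values.
F = 96485.0          # Faraday constant, C/mol
current = 0.5        # assumed cell current in amperes (illustrative)
hours = 24.0         # assumed run time (illustrative)

charge = current * hours * 3600        # total charge passed, in coulombs
moles_h2 = charge / (2 * F)

print(f"H2 produced: {moles_h2:.3f} mol (~{moles_h2 * 22.4:.2f} L at STP)")
```

The point is that the gas yield is fixed entirely by the charge passed, which is part of why calorimetry on electrolysis cells is, in principle, a well-understood bookkeeping exercise.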
Of course, Tandberg never managed to achieve fusion, with hydrogen or deuterium. It later turned out that the initial detection of helium was faulty: the detector had simply been picking up ambient helium from the atmosphere that had accumulated on the device. There was no evidence that the palladium device had ever produced helium from hydrogen at all.
The amazing thing here is that this was all done and dusted by around 1930. Yet it bears a striking resemblance to the experiment that Fleischmann and Pons attempted in 1989, nearly sixty years later, to such fanfare — with the same results.
Of course, Fleischmann and Pons didn’t know this. They had long careers investigating “electrochemistry” — the properties of liquids when you pass electrical currents through them. When they realised, over some whiskey, that it might just be possible to use those electrical currents to exert extreme amounts of pressure on deuterium in a palladium catalyst — and maybe, just maybe, create nuclear reactions — they couldn’t resist giving it a try. According to Frank Close’s book on Cold Fusion, “Too Hot To Handle”, Fleischmann said “It’s a billion to one chance — shall we do it?”
The initial experiments were the kind of million-to-one shot pet project that many scientists probably have. Not realising that their work had been anticipated sixty years before, the pair didn’t want to tell anyone about their ideas — nor did they think that anyone would possibly fund such an unlikely experiment. So they bought the equipment and funded the experiment themselves with around $100,000 of their own money.
According to later accounts from the pair, the first time they really thought they were onto something was in 1984 when their test apparatus exploded overnight. They didn’t immediately think that this was necessarily a nuclear explosion — after all, there were plenty of ways that a gas build-up or pressure explosion could’ve occurred — but they also didn’t feel like they could rule out the release of energy from fusion, either. A few years after this, they had started to reliably measure a small “heat excess” from their device.
If this heat was genuinely being produced by fusion, and wasn’t a systematic measurement error or caused by ordinary chemical reactions, then they would expect to produce plenty of neutrons that would irradiate the equipment: but their radiation detectors only measured a slight excess, far less than the billions of neutrons that they would’ve expected if these were fusion reactions. Nevertheless, by this stage, in 1988, Fleischmann and Pons — perhaps still remembering the curiosity of that explosion years before — were looking to take their pet project more seriously.
To do that, they needed funding. And to get that funding, they went to the US Department of Energy. As its current secretary (as I write this), Rick Perry — who initially wanted to abolish the DOE as a Presidential candidate — found out, a big part of the DOE’s remit is nuclear weapons and nuclear research in general. The DOE was, in fact, already funding some research into cold fusion experiments — by Steven Jones, at Brigham Young University. The DOE wasn’t going to throw money at just any project, so it insisted that the grant application be peer-reviewed. Other scientists would need to look at what Fleischmann and Pons were doing and determine if it had a prayer of success.
The DOE couldn’t think of anyone better than Stephen Jones — the very man who was working on similar experiments in the same state, and who had been interested in another kind of cold fusion — muon-catalysed fusion — for many years.
This moment is really at the root of the cold fusion chaos that followed. Once Jones had peer-reviewed the work of Fleischmann and Pons, they entered into an increasingly frantic race to be the first group to get good, reliable results and to publish the findings. After all, if there were going to be Nobel Prizes, fortune and fame, they would go to the first people to publish, and not those who followed up.
Things get a little murky here, because every character in the story obviously has their own motivations, their own side, and their own story about precisely what happened — usually to save face or to defend the reputations of colleagues, and also due to faulty memories. It seems as if, perhaps at first, the two groups entered into a loose collaboration, with Jones offering the Utah pair the use of his neutron detector. Soon enough, though, the race heated up. They had agreed that, when the experimental results were all ready and had been fully analysed, both groups would publish a paper in Nature together, announcing their joint findings. But there was deep distrust between the groups. Some from the University of Utah accused Jones of stealing the idea for his cold fusion experiments from the application he’d reviewed.
Jones sent in an abstract (a brief description of a project) to the American Physical Society saying that he had discovered a “new form of cold fusion” in February of 1989. In the meantime, FP tried to measure neutrons coming from their device. The most obvious products of nuclear fusion, after all, were neutrons and heat. Having seen a heat signature, they were now looking for the neutrons to confirm their findings.
But this experiment — the final experiment, just a month before the press conference announcement — was rushed. A careful experimentalist would have taken a proper reading of the neutron “background”. Some neutrons are being produced all the time, by traces of radioactive material, and by cosmic rays that constantly bombard the Earth’s atmosphere: so any good neutron detector will always measure a few neutrons from time to time, but the rate varies from place to place. FP didn’t have time to run 50 hours of repeated background readings with their experiment switched off, so they had two neutron detectors running — one at the site of the cell, and one 50 metres away. They found that their cell’s readings were approximately three times higher than the background, and decided that this was evidence that the experiment was producing neutrons.
You might be thinking — hang on, three times the ordinary background noise? Surely, if fusion reactions are generating energy here, there should be a huge number of neutrons — after all, isn’t the neutron flux enough to destroy the metal walls of tokamaks? And you’d be right. In fact, as many experimentalists would find out later, it’s quite possible to see variations this high on a neutron detector from pure fluke alone — this falls well short of the 95–99% confidence that physicists typically demand before reporting a result as a sure thing. For me, this is the most inexplicable bit of the whole announcement. Fleischmann and Pons had found two vague pieces of evidence for fusion reactions in their device — the excess heat, and the neutrons — but rather than reinforcing the idea that this was really fusion, they actually seemed to contradict each other: they weren’t consistent at all. A more thorough experiment would have measured the energies of the neutrons: were they consistent with the energies generated by fusion reactions?
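To put a rough number on that “pure fluke” point: neutron counts follow Poisson statistics, so when the absolute counts are small, a reading several times the background is not unusual at all. Here is a minimal sketch, with entirely made-up counts chosen purely for illustration:

```python
import math

def poisson_sf(n_obs, mu):
    """Probability of seeing n_obs or more counts when the true mean is mu."""
    cdf = sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n_obs))
    return 1.0 - cdf

# Made-up numbers, purely for illustration: suppose the background detector
# averages 1 count per counting interval, and the detector by the cell
# records 3 counts in the same interval -- "three times background".
p = poisson_sf(3, 1.0)
print(f"chance of 3+ counts from background alone: {p:.3f}")  # ~0.080
```

An eight-per-cent fluke is nowhere near the confidence a physicist would demand for a discovery; with detectors running for days on end, excursions like this are practically guaranteed to turn up somewhere.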
Actually, to their credit, Fleischmann and Pons did try. Harwell laboratory — the same place with the infamous ZETA reactor which you’ll remember from previous episodes — had some of the best neutron-measuring equipment in the world. They’d hoped to send their apparatus over to England for someone to take the measurements in a hurry: but it was classified as a radiation hazard and held up in airport security. With this effort frustrated, another hurried effort to measure the neutron energies measured what the team hoped were gamma rays that were being produced as a by-product of the neutrons hitting the apparatus itself. The experimentalist who measured the results — originally employed as a radiation safety officer at the University — did so under extreme time pressure in two days, but did seem to find a “peak” that might be consistent with the “energies” of the neutrons that were being produced.
At the same time as FP were trying to confirm what they thought they’d seen, the heads and administrations in the respective Universities were meeting up for a summit. Already, they were getting wildly ahead of themselves about the nature of the discovery. Fleischmann and Pons had observed a small amount of excess heat — maybe 25% more than they expected — and a low hum of “neutrons” with one hastily-run detection experiment. It was certainly an interesting anomaly, and something that needed explaining, but hardly proof of anything — let alone something so unlikely and unexpected to begin with. But to the University administrators, this was already a limitless source of energy with billions of dollars at stake.
And for those in the University, the motivations are obvious, too. Perhaps they were a little frustrated by the reluctance of the scientists to announce their findings. And they were thinking of the money from the patents, the funding for future experiments. Imagine being the people who let this opportunity slide: it’d be like the record executives who didn’t sign the Beatles.
So you have to conclude that it was a perfect storm: pressure on them from the head of the University to announce the discovery so that it might benefit, starry-eyed dreams of being the heroes of a new form of energy, and the fear that they might be beaten to the punch by someone who might’ve stolen their work.
The days running up to the press conference were chaotic for Fleischmann and Pons. Two days beforehand, Fleischmann received word back from Harwell, where one of his experiments had eventually arrived: they had detected no neutrons. Pons reassured him — perhaps they’d set up the experiment incorrectly: it was sensitive, and didn’t seem to work every time they ran the test.
Apparently, the scientists had some misgivings — as you’d think they should have — that the momentum of events was running away from them, and running away with itself. At least, this is what they say. But Fleischmann hardly helped himself by tipping off a reporter for the Financial Times in the UK, where he lived, about the results of their discovery. Because of a bizarre confluence of holidays and timezones, this reporter published the story on the morning of the press conference — which meant that over 200 journalists descended on the University of Utah to see the announcement, a truly colossal number.
Even as results were coming in that cast doubt on their shaky experimental findings, the cold fusion hype machine was whirring into overdrive. Fleischmann, for his part, just wanted to get through the press conference and then return home to think things through. Unfortunately for him, he didn’t appreciate quite how dramatic things were about to get.
Of course, it’s worth saying that the media didn’t help. For example, the Daily Telegraph in the UK reported that you could build Fleischmann and Pons’ apparatus at home for around £90 — resulting in the Harwell laboratory being flooded with calls from members of the public, asking for advice on building their own cold fusion reactors at home. Meanwhile, plenty of more reputable labs were also trying to repeat the experiment, leading to a flurry of contradictory results. Some groups saw the excess heat: others didn’t. Some seemed to “detect neutrons” in their apparatus — others didn’t. Still others claimed to have detected tritium, another expected by-product of the reactions. The fact that the apparatus was easy to build — or, at least, try to build — was partly to blame here, as it allowed dozens of institutions to conduct their own hasty experiments. All of these experiments were conducted very quickly in the week or two after the press conference, without full access to the details of the setup. None of them demonstrated a consistent set of results.
Things got much, much worse for Fleischmann and Pons when details of their work were first presented to a large scientific audience at the Harwell lab. They showed the gamma-ray measurements. This was supposedly the killer piece of evidence that their neutrons genuinely came from fusion reactions. But the measurements didn’t look anything like what the scientists were expecting. For a start, they expected to see several small peaks — from the various different reactions that were taking place. Instead, they just saw one peak. This had been interpreted by Fleischmann and Pons as evidence for neutrons from fusion — but it was in the wrong place. The gamma rays produced when fusion neutrons are captured by hydrogen in the surrounding water, forming deuterium, should have appeared at 2.22 MeV of energy, but the peak was at 2.5 MeV. This was obviously wrong, as it was more energy than should be released by this process. The peak was also the wrong shape. It was clear that these measurements were wrong.
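That 2.22 MeV figure isn’t arbitrary: it’s the binding energy of the deuteron, carried off as a gamma ray whenever a stray neutron is captured by a proton (n + p → d + γ). You can recover it from published rest masses with a three-line check (values in MeV, rounded; the tiny recoil correction is ignored here):

```python
# Rest masses in MeV/c^2 (standard values, rounded to 3 decimal places)
m_proton   = 938.272
m_neutron  = 939.565
m_deuteron = 1875.613

# The mass deficit when a neutron is captured by a proton comes out
# as a single gamma ray -- this is the line the detectors should see.
e_gamma = m_proton + m_neutron - m_deuteron
print(f"expected capture gamma: {e_gamma:.3f} MeV")  # 2.224 MeV
```

A peak at 2.5 MeV simply cannot come from this process, which is exactly why the critics seized on it.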
A few weeks later, when the paper describing this experiment eventually came out — not in Nature, but in the Journal of Electroanalytical Chemistry — the peak had mysteriously moved back to the correct spot. But Fleischmann and Pons hadn’t corrected their mistake everywhere: the figure of 2.5 MeV still appeared in the paper. This is really the first indisputable, outright evidence of actual scientific fraud from the pair.
Everyone will remember a few scientific results that got an awful lot of media attention at first, but later turned out to be mistakes. I remember a few years ago when the faster-than-light neutrinos were “announced” — but, of course, they turned out to be experimental errors. In that case, the team who discovered them said that they thought it was most likely an error — but that they published the results to see if anyone else could help them work out what it might be.
At this stage, with all the doubts and misgivings about the science behind their experiment, the inconsistencies and the confused results, Fleischmann and Pons should not have announced their work with the fanfare that they did. Even after it had started to be criticised, they should have admitted that they weren’t sure. But it seems as if events had just spiralled out of all control. With dozens of conflicting results coming out from different universities, some of whom had apparently reported heat and neutrons in those earlier weeks, Fleischmann and Pons perhaps had reason to hope that the cold fusion phenomenon was real, and that the inconsistencies in their experiment could be ironed out.
Naturally, the scientists working on “hot fusion” — who’d spent decades trying to crack this problem, and knew just how hard fusion was to get right — were amongst the harshest critics. Some scientists at MIT — who well knew the spectrum of energies from the gamma rays that were the supposed proof of “neutrons” — scoured TV footage of the instruments to try and find details of the experiment to disprove it. They went so far as to reconstruct the gamma ray spectrum from a brief shot of it on video — noticing that the peak was in the wrong position, and not surrounded by reliable measurements from other instruments. Some may have hacked into Pons’ email account. Those in the cold fusion camp knew that they would cause waves. After all, if they were right, there was no need to spend millions of dollars building tokamaks to get fusion to work. Funding was already being diverted from “hot fusion” projects to cold fusion.
Things got even worse when they presented their research to the American Chemical Society. At first, thanks to some classic rivalry between the scientists, the chemists were sympathetic to the idea that perhaps they’d succeeded where generations of genius, smart-alec physicists had failed. There was a huge amount of media hype surrounding the experiments. [LIST EXAMPLES OF COLD FUSION MEDIA HYPE, HEADLINES].
Yet a question quickly arose — why hadn’t they run an obvious control experiment? In a control experiment, you take measurements where everything is exactly the same — save for the one difference that you’re trying to measure. Fleischmann and Pons were attempting to measure deuterium nuclei fusing at a palladium cathode when electrical currents were passed through heavy water. Replace the heavy water with ordinary water, and you obviously shouldn’t see any fusion, but you can keep the apparatus running with its electrical current and its palladium cathode in place. When one of the chemists asked why they hadn’t done this, Pons’s response was concerning: he said “We do not get the total blank experiment that we expected.” In later conferences, both of the scientists would refuse to talk about experiments with ordinary water altogether.
In other words, Fleischmann and Pons’ apparatus appeared to be measuring “fusion reactions” even when there was nothing there that could fuse. How could they possibly be sure that any of the heat or so-called “neutrons” they were observing were really from cold fusion reactions, in that case? That’s the kind of discrepancy that makes you sit up and take notice. Combine this with the fact that — if the heat was genuinely being produced by fusion reactions — the scientists should have seen millions of times more neutrons than they were actually observing, and the whole picture starts to fall apart.
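Where does “millions of times more neutrons” come from? A rough order-of-magnitude estimate makes it concrete. The wattage here is an illustrative round number, not Fleischmann and Pons’ exact figure:

```python
MEV_TO_JOULE = 1.602e-13

# The neutron-producing D-D branch (D + D -> He-3 + n) releases about
# 3.27 MeV per reaction, and roughly half of all D-D reactions take it.
energy_per_reaction_J = 3.27 * MEV_TO_JOULE
excess_heat_W = 1.0  # illustrative: suppose 1 watt of excess heat is fusion

reactions_per_s = excess_heat_W / energy_per_reaction_J
neutrons_per_s = reactions_per_s / 2  # only the neutron branch emits one
print(f"expected: roughly {neutrons_per_s:.0e} neutrons per second")
```

Set that figure, around a trillion neutrons per second, against the few stray counts above background that were actually seen, and the mismatch is something like eight orders of magnitude; a flux that large would also have been a serious radiation hazard to anyone standing near the cell.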
Soon enough, the Universities that had reported corroborating evidence began to back down or tone down their claims. They couldn’t produce the results consistently. The excess heat had dropped off to much lower levels. There was a particular neutron detector that had been used by plenty of the original groups to look into Cold Fusion. This neutron detector was designed to deal with signals from billions upon billions of excess neutrons at a time — the kind of signals that a genuine radioactive source or fusion reactions might give you. Under these circumstances, the neutron detectors functioned really well — so no one had ever really noticed a problem with them before.
But it turned out that small temperature fluctuations could lead to small excesses in the neutron detectors — exactly the type that Fleischmann and Pons had seen, and taken as evidence for fusion. When some of the universities that had set up their own experiments controlled the temperature around the neutron detector, they saw the neutron “signal” disappear. They’d been measurement errors all along. Nature rejected the joint papers that were submitted from the two Utah universities, telling them to go away and try again.
This is the part of the story where Fleischmann and Pons should have admitted their doubts, and resolved to go away and work on the experiments some more, to respond to the criticisms that were being levelled at them. The fact that they didn’t — that they carried on in what must have been increasingly desperate hope that they really had seen something, quite probably including falsifying results in the Journal paper — might not be outright fraud, but it is definitely scientific misconduct.
As well as Fleischmann and Pons, and the University administrators who pressured them into releasing their work before it was ready — and then tried to sell it to the government for millions of dollars before it had been confirmed — the news media deserves a little of the blame for running with the story despite the discrepancies.
I often wonder how much trust in science is damaged by the people who write headlines. If the headlines — especially in some of the trashier newspapers — are all you ever glance or skim over, you’ll get the impression that “scientists” are the biggest bunch of fantasists and hype merchants that you’ve ever seen. Hardly a week goes by without a headline reporting on some world-changing discovery, aliens or fusion or room-temperature superconductors or the end of the world. More often than not, they’re misquoting the scientists, identifying fringe science as part of the mainstream, and massively simplifying or exaggerating the findings. Everyone needs clicks, and everyone needs traffic, and everyone’s attention span is about three seconds long. But it can’t help but undermine people’s confidence when this is what they see, and nothing ever seems to result from it.
You can say that the journalists didn’t understand the science — and perhaps this was a genuinely entirely new phenomenon, so no one in the world really did. That’s true enough. But even if you don’t understand the science, it’s best to be cautious. Here’s a fact about the cold fusion fiasco I didn’t really appreciate, from Frank Close’s book “Too Hot To Handle.” As we discussed, the secret ingredient for FP’s fusion cells was palladium — the catalyst that, in their theory, allowed deuterium to fuse. But palladium is a rare element, difficult to mine and extract. People looking to replicate the Fleischmann and Pons experiment were already running into difficulties in obtaining palladium, while the FP cell only produced a small excess of heat. Claiming that cold fusion was the source of limitless energy, then, was incorrect: the amount of palladium being mined at the time was only perhaps enough to power a single, medium-sized power plant. So, even assuming cold fusion was real, you could dismiss the idea that it would power the world overnight with a simple back-of-the-envelope calculation.
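That back-of-the-envelope calculation is easy to reproduce. Every number below is an illustrative assumption on my part (the production figure and the claimed power density in particular are order-of-magnitude guesses, not sourced data), but the conclusion is robust: even granting cold fusion everything, the palladium supply caps it at roughly one power station’s worth of output.

```python
# All figures here are illustrative assumptions, not measured values.
annual_pd_production_tonnes = 100   # assumed order of magnitude, late 1980s
claimed_power_W_per_cm3     = 20    # a generous assumption for an FP-type cell
pd_density_g_per_cm3        = 12.0  # palladium is roughly 12 g/cm^3

pd_volume_cm3  = annual_pd_production_tonnes * 1e6 / pd_density_g_per_cm3
total_power_MW = pd_volume_cm3 * claimed_power_W_per_cm3 / 1e6
print(f"if a whole year's mined Pd ran cells: ~{total_power_MW:.0f} MW")
```

For comparison, a single medium-sized power station produces some hundreds of megawatts, and the world consumes terawatts.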
To all intents and purposes, Cold Fusion — born in that press conference on the 23rd March, 1989 — was dead by the end of April 1989. By then, dozens of separate problems had been found with the initial experiment: various groups had tried and failed to reproduce the findings (which was necessary if cold fusion was to be useful anyway), which made it seem even more likely that the results were in error. Combine that with the fact that they flew in the face of all known physics, and would require new and unlikely phenomena to take place — even though this type of apparatus had first been investigated in the 1920s — and there really wasn’t much to recommend cold fusion.
But the same combination of factors that led to the initial announcement also led many of the protagonists to double down. The well-known idea that a lie can get around the world before the truth gets its boots on is true in the sciences as well. And Fleischmann and Pons, in the face of mounting evidence that they’d made a catastrophic mistake, remained true believers in the phenomenon that they’d discovered.
They took with them a very small cadre of other researchers, who had become intoxicated with the excess heat or neutrons they thought they’d seen, and the promise that — if only they could get this process to work just right — they might yet achieve the fame, the billions, the scientific progress, the boundless source of energy that they’d been looking for. The state of Utah, where the two main “cold fusion” experiments had been conducted, ploughed an additional $5m into the research, creating the National Cold Fusion Institute. Isolated groups continued working on cold fusion into the 1990s. The US government gave up, and the National Cold Fusion Institute shut down, by 1991. Fleischmann and Pons themselves were employed for several years by the Toyota corporation in Europe, before eventually growing tired of the lack of results: similarly, the Japanese corporations that had looked into cold fusion had all shut down their research by the mid-1990s, after spending millions of dollars. Since then, there have been sporadic bursts of activity. In 2004, for example, the Department of Energy organised another review into the state of play in cold fusion research — and even though the review concluded that “the effects are not repeatable, the magnitude of the effect has not increased in over a decade of work, and that many of the reported experiments were not well documented”, many cold fusion advocates were at least happy to have their work given serious consideration by the government.
In 2012, in an echo of other research into fusion by Penthouse Magazine founder Bob Guccione, the billionaire Sidney Kimmel watched an interview with a physicist about cold fusion on CBS news and became convinced that cold fusion might be a road to energy production. He invested about $5.5m in the Low Energy Nuclear Research department at the University of Missouri — which, it appears, has since been disbanded with its first director claiming to no longer believe in cold fusion.
There are still communities of cold fusion researchers to this day — although they usually avoid the cursed formulation of “Cold Fusion”, instead preferring to talk about “Low-Energy Nuclear Reactions” or “Catalysed Fusion Reactions” and avoid the stigma associated with the disaster in 1989. Some of them publish in legitimate scientific journals from time to time — many more publish in illegitimate scientific journals, and hold conferences in the same kind of venues as UFO truthers and 9/11 conspiracy theorists (and their websites are often very similar.) In a lot of cases, there’s a cult-like mentality of pathological science — it’s motivated more by a desire to be right where everyone else is wrong, and resentment towards the perceived arrogance of the mainstream scientific community, than any actual new results or developments in the field of late.
Prominent amongst these is Andrea Rossi’s “E-Cat” device, which we dealt with in the thermodynamics episode on Free Energy Scams. It is fascinating to me that you can still go online and see forums of people quite seriously debating which form of cold fusion is the superior option for satisfying the world’s energy needs, while ignoring the fact that the whole phenomenon appears to be an illusion that has very little to do with any nuclear reactions whatsoever. At best, their research might lead to some interesting new chemistry. I don’t mean to say that there’s no interest in studying electrochemistry — Fleischmann and Pons had very productive and successful scientific careers doing just that, before they fell down the cold-fusion rabbit-hole and wrecked it all.
But Rossi, who is often held up as one of the leading lights of this movement online, is an obvious fraudster. His so-called E-Cat device makes use of a fusion reaction that’s even harder to achieve than the one claimed by Fleischmann and Pons. If it worked, everyone standing nearby would be killed by gamma radiation from the reactor. He has never demonstrated the device in public without it being plugged into the mains, and it has never been subjected to any proper peer review in the decade that he’s been plugging this nonsense. Perhaps most damning, however, is the fact that Rossi has been trying to pull versions of the same con for his entire life. In the 1970s, he claimed to have a company that could produce oil from industrial waste: but this scheme was exposed as a fraud, and he was imprisoned. In the early 2000s, he was employed to produce thermoelectric devices that performed less than 0.1% as efficiently as he claimed they would. Now he’s flogging cold fusion. If this is the public face of your movement, and the source of all these conspiracy theories, then you need to take a serious look at yourself in the mirror.
I’m sure there’s at least a few of you out there who think that I’m being excessively harsh to cold fusion, and maybe displaying some of that arrogance that has led people to be alienated from the mainstream of science.
Charles Seife attributes the longevity of Cold Fusion scandals to this reaction, which some saw as knee-jerk: “The outrage over how FP were treated helped to keep cold fusion alive. The smackdown in May had the air of a public lynching. In its wake, the climate in the physics community turned from scepticism to scorn. A number of people leapt into the fray on the side of the underdogs.”
There are understandable reasons to want to believe in the underdogs. Any newspaper or article-writer dealing with cold fusion isn’t going to get a headline out of saying “Cold Fusion remains dead.” This show would probably get more listeners if I was claiming to hold the secret to unlimited energy, but that I was being brutally repressed by an arrogant scientific establishment that couldn’t accept that they were wrong… rather than telling you the truth. Conspiracy theories in subsequent years were further fuelled when a cold fusion scientist was killed in an explosion during one of his experiments, and when another physicist and science writer who had supported cold fusion was murdered outside his home in 2004. [He was in fact killed by someone he’d just evicted from that house, which he was preparing to rent out, and their accomplices; those people are all now in jail. But it hardly helps to dampen down a conspiracy theory when someone is mysteriously murdered.]
But the reality is that the claims of cold-fusion advocates simply don’t hold water. Across the world, across various nations, across various organisations, hundreds of millions of dollars have been spent pursuing this phenomenon, chasing this phantom. Three decades of scrutiny have been given to the claims of cold-fusion advocates. It remains on the fringe for good reason. Extraordinary claims — like the idea that you’ve found something totally new in your test-tube experiment that contradicts all known physics about nuclear fusion — require extraordinary evidence. If you want to overturn the scientific paradigm, you need to be able to persuade people that you must be correct.
The evidence that nuclear reactions are genuinely going on is extremely thin on the ground. And as for the small amounts of excess heat that are observed in the reaction — if they’re not calorimetry errors, they might just be some kind of chemical reaction, or results from the electrocatalysis. None of these millions of dollars in experimentation has been able to reliably reproduce these heat excesses; and no one has built any kind of useful device using these reactions. I imagine that if every potential candidate for, say, a battery, got the same degree of scientific attention, we would probably have far more interesting phenomena to talk about than the strange phantom that Fleischmann and Pons discovered. And it’s in this that cold fusion is really frustrating. It should really be seen as a success story for the scientific method: a claimed new discovery was made, it was tested and peer-reviewed, and it was ultimately found to be lacking in merit — so it wasn’t pursued. All this happened in the short space of a few weeks. Yet, instead, millions of dollars and many years of effort from otherwise talented scientists have been poured down this particular drain without any progress being made. Now, especially in the Buzzkill episode, I’ve made it clear that lots of people view magnetic confinement fusion or inertial confinement fusion efforts as a waste of money — because they haven’t realised any net power yet.
Indeed, there is a certain parallel between cold-fusion researchers and other fusion scientists. Not in the rigour of their method, or the merits of their science, but the fact that they’re all intoxicated by a similar dream — even as people sneer and say that it’s impossible, or that they’ve been promising too much for too many years. It’s a noble dream, of a clean future where cheap energy liberates us all from environmental destruction and lifts people out of poverty without contributing to climate change or resource depletion at the same time. In that sense, there are similarities.
But at least actual nuclear fusion can point to real progress: confinement times have gone up, our understanding of the ways plasma can behave has improved, energy production has risen with each new generation of devices, and we have developed new devices and new technologies as a by-product of the research. The fact that it may not ever be commercially viable doesn’t mean that it’s bad scientific research: it just might make it a bad investment if you’re interested in something mainly to make a whole stack of money. But cold fusion has no such success story: there have been no developments in the devices or experiments used, there has been no consistent theory of what’s going on in these reactions. They have done nothing to enhance our understanding of nature, and have little prospect of ever working out — mainly because the fundamental ideas behind them are without much scientific merit. The field essentially exists and survives on that dreamy nostalgia, and that delusion, that began in 1989. If Fleischmann and Pons had done the right thing, and only published results they were sure of, that could be produced consistently, then there would be no need for episodes on cold fusion, because no one would ever have heard of it.
The whole affair has, instead, damaged the credibility of science, and surely done more to hold back scientific development than many other experiments you can name. Its greatest value is as a cautionary tale for future scientists: it takes a great deal of evidence to be genuinely sure that you’ve discovered something brand-new and phenomenal, and you should be open and honest in your research, expose your claims and experimental method to repetition by others, and gracefully accept when you’ve made an honest mistake. Otherwise, there is nothing for your theory but to become another embarrassing by-word for pseudoscience. It may seem harsh, but it works.
So it’s time for a quick overview of where we are in this marathon nuclear series, and where we’re going next. So far, we’ve followed nuclear fusion from its first theoretical underpinnings, through the hydrogen bomb and early experiments. The tokamak revolution came along and became the main magnetic confinement fusion experiment, and the JET tokamak came the closest to producing net energy from fusion reactions. The ITER tokamak collaboration, between the US and the USSR, was agreed towards the end of the Cold War as a next-generation tokamak that would be the first to achieve scientific breakeven. All the while, however, there was growing doubt that fusion could ever be commercially viable on this scale, with these machines. This led to magnetic confinement fusion facing competitors — from some realistic sources, like laser fusion or inertial confinement fusion, and some far less realistic ones, like the cold fusion experiments we’ve just described.
Over the next few episodes, we’ll talk about inertial confinement fusion — how it developed, and the story of the National Ignition Facility — the largest laser fusion experiment yet attempted. We’ll also discuss the ITER project — its inception, and how it’s developed over the years, taking us right up to the present day. Alongside that, I’ll describe some of the many start-ups and companies like Lockheed Martin that are, in the modern era, trying to outflank ITER by producing fusion by some other means — some of them tokamaks, some inertial confinement, and some entirely new — alongside other magnetic confinement experiments, like the Wendelstein Stellarator in Germany, that are reviving old ideas. So, in short, we’re going to bring fusion right up to the present day, and then look at its prospects into the future.
I don’t know how long I’ll keep doing this show — but I hope for a very long time. If that’s the case, perhaps we’ll be able to come back in a few years when ITER is switched on and see how all of those predictions I made pan out. And, of course, we’ll also be able to see how well the various startups we profiled are doing.
That’s all in the future, though. We’re nowhere near done with the past yet! I’ll see you next time. Until then, take care.
Nuclear Fusion: Secret Codes, Secret Tests, Supernovae
Hello, and welcome to Physical Attraction. This week, we’re going to jump back a little in the fusion narrative. You’ll remember from last time that, during the 1970s, following the invention of the laser, inertial confinement fusion had first started to be proposed. The basic idea here was that, by using lasers to symmetrically compress a fusion capsule, you could, very briefly, attain sufficient densities and temperatures to release net energy from the capsule. The principle is closer to a controlled H-bomb than magnetic confinement fusion, which aims for a longer confinement of the plasma as it fuses. For this reason, it naturally sparked some military interest, and perhaps for that reason, the main centre for research was in the USA.
However, the laser fusion scientists quickly discovered that, unless the irradiation of the capsule was extremely symmetric, Rayleigh-Taylor instabilities during the implosion (where tendrils of plasma flew out in all directions rather than being compressed) prevented net energy from being produced by these experiments.
To attempt to remedy this problem, the solution was ever-larger devices with ever-more lasers, mirrors, lenses, and optical equipment. As with magnetic confinement fusion before it, laser fusion discovered new and exciting ways in which plasma could fail to do what it was supposed to — and found that the only possible solution was to build ever more expensive experiments to try to overcome the latest instability they’d encountered.
Some of the optimism surrounding laser fusion involved a somewhat mysterious, classified computer code known as LASNEX. Versions of this code have been in use in many of the major laser fusion experiments since the 1960s. Like many computer models, it breaks the deuterium pellet down into a series of individual grid boxes, and then uses our best knowledge of how X-rays and elementary particles interact to try and predict what will happen when the experiment is run. Each grid box stores, for example, the energy of the photons, the energy of the electrons, and the densities of each; the code then advances through timesteps, integrating the physics step by step, in a similar way to the fluid-mechanics simulations used in aerodynamics to model the performance of cars and aeroplanes. Physicists can then, ideally, use this model to experiment with how small changes to the design will work — what happens if the laser is a slightly different frequency, or if we illuminate here, or use this particular energy? — without going to the trouble of building thousands of different devices and testing them all.
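As an aside, it may help to sketch that grid-and-timestep structure in miniature. This is emphatically not LASNEX, which remains classified: it is a toy model, and every number in it (the cell count, the timestep, the coupling rate, the starting energies) is invented purely for illustration.

```python
# Toy illustration of the grid-and-timestep scheme described above:
# each cell stores energy densities, and the code advances them in small
# timesteps using a simple (entirely hypothetical) energy-exchange rule.

N_CELLS = 10        # number of grid boxes across the pellet (illustrative)
DT = 1e-12          # timestep in seconds (illustrative)
COUPLING = 1e11     # hypothetical electron-photon energy-exchange rate, 1/s

# Each cell tracks electron energy and photon energy (arbitrary units).
electron_e = [1.0] * N_CELLS
photon_e = [5.0] * N_CELLS   # photons start "hotter" than the electrons

def step(electron_e, photon_e, dt):
    """Advance one timestep: relax each cell towards local equilibrium."""
    for i in range(N_CELLS):
        # Energy flows from photons to electrons at a rate proportional
        # to their difference -- a crude stand-in for the real physics.
        transfer = COUPLING * (photon_e[i] - electron_e[i]) * dt
        electron_e[i] += transfer
        photon_e[i] -= transfer

for _ in range(100):
    step(electron_e, photon_e, DT)

# After many steps the two populations approach a common value.
print(round(electron_e[0], 3), round(photon_e[0], 3))
```

The real code tracks far more quantities per cell and far more physics per step, but the overall loop structure — state per grid box, advanced timestep by timestep — is the same.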
The only problem was that LASNEX repeatedly predicted that the physicists were far closer to breakeven than they actually were. When the Shiva laser fusion system was built in the 1970s, LASNEX had led the scientists to believe that it would achieve breakeven, but it was frustrated by Rayleigh-Taylor instabilities and ended up producing less than 0.01% of the predicted energy. Chances are that, like all models, it was built on incomplete information. Sometimes, a model can work well despite this — and LASNEX was pretty good at predicting the behaviour of low-energy plasmas. But it failed to anticipate the instabilities and problems associated with higher-energy plasmas — in the same way that Newtonian mechanics fails to predict the behaviour of particles at high energies. It’s not that Newtonian mechanics is a bad model: you can get to the Moon using its predictions, and understand plenty of phenomena in the world around us. It just stops being applicable at a certain point, as new phenomena become important.
Results from Shiva in the 1970s were incorporated into LASNEX and its modelling, and they motivated the construction of the Nova fusion device. This laser, built at Lawrence Livermore National Laboratory and completed in 1984 — just a year after the JET Tokamak — consisted of beamlines 91m long, folded in on themselves so that each was really 182m long. The light that’s produced — for a few nanoseconds, a pulse of trillions of watts, comparable to the entire world’s energy consumption at any one time, focused on a small capsule of deuterium — comes from ten separate beams, and is reflected to bathe the capsule from many different angles. The whole device cost $200m, and its design and construction phase was torturous — in 1979, for example, John Nuckolls, one of the key fusion scientists, realised that he’d made an error in his calculations, and that the device — which they were already building at the time — wouldn’t achieve breakeven as intended. The modified Nova design was supposed to be able to achieve breakeven.
And, while it managed to produce plenty of fusion reactions — approximately 10 trillion neutrons, and hence fusion reactions between deuterium nuclei, every time the device was fired — it still wasn’t enough. That energy release, even if it could all be usefully harnessed (which is unrealistic), was only around 5 joules. That’s around enough energy to lift a bag of sugar by 1m, which is obviously not quite the yield you’d hope for from your pellet of deuterium and your $200m fusion device. And, of course, it wasn’t breakeven, because the laser consumed kilojoules of energy with each shot — so it was thousands of times away from achieving net energy release.
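We can sanity-check that figure with one line of arithmetic, assuming an average of roughly 3.65 MeV released per deuterium-deuterium reaction (the mean of its two roughly equally likely branches — an assumption of mine, not a figure from the episode):

```python
# Energy from ~10 trillion D-D fusion reactions, assuming ~3.65 MeV each.
MEV_TO_JOULES = 1.602e-13         # conversion factor: 1 MeV in joules

reactions = 1e13                  # ~10 trillion reactions per shot
energy_per_reaction_mev = 3.65    # assumed average per D-D reaction

total_joules = reactions * energy_per_reaction_mev * MEV_TO_JOULES
print(round(total_joules, 1))     # a handful of joules, consistent with ~5 J
```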
Again, the problem was that they couldn’t uniformly illuminate the capsule. Matching the energy provided by each of the beamlines to the degree of precision that was required was beyond Nova’s capability. The result was turbulence in the plasma, hot spots and cold spots, and the familiar-by-now Rayleigh-Taylor instabilities that prevented the capsule from behaving like it did in the simulations. Descriptions of the scientific output of Nova make this clear: another project that was intended to be the breakthrough, the moment that scientists could claim they’d cracked fusion and start genuine, concerted, international efforts towards building practical and commercial reactors — and another project that became just another step along the road, falling short of its goals.
At this point, you might be wondering something about the laser fusion efforts. Namely, well — why are the scientists trying to work “up” from smaller devices to larger ones? We already know how to get inertial confinement fusion working. After all, incredibly symmetric compression of a fuel capsule to release huge amounts of energy — that’s how a fusion weapon, the H-bomb, works. So, instead of scaling up experiments with lasers, why not try to scale down hydrogen bombs? At least that way, you’re starting from something that you know can produce fusion. If it turns out to be impractical to overcome the Rayleigh-Taylor instabilities without causing a gigantic explosion, then presumably, it’s better to find that out by scaling down atomic bombs, rather than building huge lasers that can’t ever create commercial power but might be able to make atomic-bomb style explosions.
Of course, like most good ideas in science, someone else has already come up with it. In the late 1970s and 1980s, at the laboratories that had pioneered nuclear research for so many years at Los Alamos and Livermore, two sets of experiments were done using underground nuclear tests. These were referred to as “Halite” and “Centurion” — and, in these sets of experiments, atomic bombs were used to create the X-rays that compressed a fusion capsule of deuterium and tritium — much in the same way as hydrogen bombs work.
The problem with these experiments — as you might expect with underground nuclear tests that have obvious weapons applications — is that the results are highly classified. We know, for example, that one of the things they likely did was move the deuterium fuel capsule further from the source of the X-rays, in an attempt to determine how much energy was needed, and how symmetric the illumination had to be, before the capsules would ignite and produce energy as hoped.
Charles Seife quotes one Leo Mascheroni, who used to work at Los Alamos. He claims that the Halite-Centurion devices received millions of joules of energy — thousands of times more than was being delivered by devices like Nova — and even then, 80% of the capsules failed to ignite. According to Mascheroni, the LASNEX code was unable to predict why these failures were occurring.
However, it’s worth pointing out that Mascheroni — who left Los Alamos as a disgruntled scientist when they refused to work on his ideas for laser fusion — may not be the most unbiased observer. Shortly after Seife published the book, he was indicted by the US authorities. Apparently, he had met with a Venezuelan agent, and told him that he could help Venezuela develop a thermonuclear bomb, based on his experience and expertise from Los Alamos. At one point, he demanded $800,000 for a plan that laid out how to build a nuclear bomb in a ten-year programme, and offered to fly to Venezuela to help this take place. [It’s pretty amazing to consider that he was well into his 70s when all this went down… anything for a quiet life?] Unfortunately, the Venezuelan agent was really an undercover FBI agent, and Mascheroni’s plan to sell nuclear secrets was exposed. These are hardly the actions of a guy without a grudge.
So, unless we’re going to indulge in an elaborate conspiracy theory where Mascheroni knew that the US laser fusion experiments were a waste of money, and he was arrested by the FBI on trumped-up charges to silence him… then we really have to conclude that he’s not the most trustworthy person [although he was released from jail last year, and is still living in the US, so Mascheroni, on the off-chance that you’re listening and you want to defend yourself on the podcast, feel free to get in touch via the contact form on www.physicspodcast.com].
Nevertheless, the secrecy surrounding the Halite and Centurion tests was indicative of one of the problems holding inertial confinement fusion back: because of the weapons potential of what was being developed, all of the data remains secret to this day. No scientific peer review, except by those already employed to work on the project. When people come back from Halite-Centurion and argue that the results showed that they’re on the right track towards laser fusion, and that the next experiment will do it, there’s really very little we can do to dispute it without access to the data. Yet science done in this way is slower. Fewer people are involved. Mistakes and errors in thinking can persist for longer. So some scepticism is surely justified here.
Before we move on from Nova and into the world of the National Ignition Facility (NIF) — which I really view as being like the Inertial Confinement Fusion equivalent of ITER for tokamaks, one last, huge, multi-billion dollar and multi-decade attempt to really get this thing working — there’s a rather interesting coda to how Nova was used.
Starting in the late 1980s, a new method of creating very short but very high-power laser pulses was developed, known as chirped pulse amplification, or CPA. Starting in 1992, LLNL staff modified one of Nova’s existing arms to build an experimental CPA laser that produced up to 1.25 PW. Known simply as Petawatt, it operated until 1999, when Nova was dismantled to make way for NIF.
The basic amplification system used in Nova and other high-power lasers of its era was limited in terms of power density and pulse length. One problem was that the amplifier glass responded over a period of time, not instantaneously, and very short pulses would not be strongly amplified. Another problem was that the high power densities led to the same sorts of self-focusing problems that had caused earlier versions of the device to burn through parts of its lenses and mirrors, as we discussed in previous episodes. But these devices were at such a magnitude that even measures like spatial filtering, where particular hot-spots and cold-spots in the beam were blocked off by passing the laser through pinholes, would not be enough; in fact, the power densities were high enough to cause little self-focused laser filaments to form in the air itself.
CPA avoids both of these problems by spreading out the laser pulse in time. It does this by reflecting a relatively broadband pulse (as compared to most lasers) off a pair of diffraction gratings, which splits it spatially into its different frequencies: essentially the same thing a simple prism does with visible light. These individual frequencies have to travel different distances when reflected back into the beamline, resulting in the pulse being “stretched out” in time. This longer pulse is fed into the amplifiers as normal, which now have time to respond normally. After amplification, the beams are sent into a second pair of gratings “in reverse” to recombine them into a single short pulse with high power.
These ultrashort, extremely powerful laser pulses have made a new type of laser fusion — fast ignition — conceivable, and the concept was tested quite extensively at Nova. The idea behind fast ignition is to add yet another laser pulse on top of the original implosion: an extremely short burst of energy delivered to the core of the fusion pellet at the moment of maximum compression. You can imagine this as the extra “spark” that ignites the fire when the plasma is already hot and dense, ensuring the conditions have that much more available energy to ignite a fusion reaction that can produce net energy. Usually this is done using a hollow cone of material in the very centre of the fusion capsule: that’s where you blast your quick, ultra-hot ignition laser.
According to the Lawrence Livermore National Laboratory:
An advantage of the FI approach is that the density and pressure requirements are less than in central hot-spot ignition, so in principle fast ignition will allow some relaxation of the need to maintain precise, spherical symmetry of the imploding fuel capsule. In addition, FI uses a much smaller mass ignition region, resulting in reduced energy input, yet provides an improved energy gain estimated to be as much as a factor of 10 to 20 over the central hot-spot approach. With reduced laser-driver energy, substantially increased fusion energy gain — as much as 300 times the energy input — and lower capsule symmetry requirements, the fast-ignition approach could provide an easier development pathway toward an eventual inertial fusion energy power plant.
Nowadays, many of the people who are still working on inertial confinement fusion (and there are plenty) view fast ignition as one of the best routes towards fusion. But, like rivals to the tokamak in magnetic confinement fusion, the counterargument against getting too excited is that the science is less mature. In fusion, as we’ve seen plenty of times by now, there is often good reason not to get too excited about a brand-new idea: it often means that you haven’t yet encountered whatever big, horrible plasma instability makes your idea impractical, and hence that you’ll need several generations of device to reach performance comparable with the big, established guns. Their proponents would argue that the money is better spent on the next ITER or the next NIF, or at any rate on some technology with a proven track record. Indeed, more recently fast ignition itself has fallen out of favour, replaced by ideas like “shock ignition” that aim to overcome some of the challenges found in the theory and early experiments with fast ignition. Maybe the idea isn’t so wonderful as its proponents make it sound, but others will disagree, and will view this turning point, where Nova was dismantled and the fast ignition concept passed over in favour of a bigger conventional device, as a sad day for the practicality of laser fusion. In the event, the decision was taken to build NIF: a much bigger, more conventional laser fusion experiment that would attempt to solve the problems of previous experiments.
But it’s worth mentioning this second life of the Nova experiment — to point out that there are other routes to ICF worth exploring that may yet come good, and also because it links to a previous episode: our 2018 Nobel Prize special. Chirped pulse amplification, which made all of this possible, was developed by Donna Strickland and Gérard Mourou, who won the 2018 Nobel Prize in Physics for work they’d done decades before that had become the standard in so much of laser research. So, on the off chance that this does turn out to be the One True Path to commercially viable fusion, in a nice way, last year’s Nobel laureates would have been instrumental in making it happen.
Next time, then, we’ll look at the story of NIF — the biggest Inertial Confinement Fusion experiment that humans have yet performed. It will come as no surprise to you to hear that — yes, it ran billions of dollars over budget, and, yes, it was severely delayed in its completion. But we must soldier bravely on, through the next generation of physicists to run at this particular wall — to tell the tale of how it started, and precisely what went wrong. Telling the story of NIF, which is still running today — albeit as a slightly different beast to its original design — will take us right the way up to the present, and the future. See you there.
Nuclear Fusion: NIF-ty Business
The National Ignition Facility at the Lawrence Livermore Laboratory in California is the largest laser device yet constructed in the world. It represents perhaps the most instantaneously powerful human invention on Earth: when its beam fires, for a few nanoseconds, or billionths of a second, it irradiates its target with 500 terawatts of power. By comparison, the entire human race currently consumes on average around 13TW of power, in the form of primary fuel consumption from fossil fuels, nuclear and renewables. In other words, it’s as if all of the energy use currently going on everywhere in the world were multiplied by nearly 40 and then, for an infinitesimal moment, blasted at a tiny capsule of fusion fuel in an attempt to “ignite” a fusion reaction. This week, we’re going to talk about how this device was built, how it works, what it managed to achieve, and what it failed to achieve.
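For what it’s worth, the ratio behind that comparison is a one-liner, using the figures quoted above:

```python
# NIF's peak power versus average worldwide primary power consumption.
nif_peak_power_tw = 500    # NIF peak power, terawatts
world_avg_power_tw = 13    # average human power consumption, terawatts

print(round(nif_peak_power_tw / world_avg_power_tw))  # ~38
```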
It’s worth pointing out, of course, that right from the beginning, NIF wasn’t *just* an inertial fusion device. In fact, to many — particularly some of those who funded it — its purpose wasn’t really to develop “clean, limitless nuclear energy” and all that jazz; instead, this was really about nuclear weapons. We’ve described before how inertial confinement fusion with lasers is, in essence, trying to create the smallest possible hydrogen bomb that you can make. After all, the principle of getting hydrogen nuclei to fuse by rapidly compressing them had already been demonstrated in H-bombs around the world: the problem was doing so in a controlled way that might allow you to harness the energy for peaceful purposes.
In the early 1990s, the world’s superpowers were negotiating to stop nuclear testing altogether. From the era of Reagan and Gorbachev, the aim was to prevent nuclear weapons testing from taking place in the future. The Comprehensive Test-Ban Treaty, eventually signed in 1996 after the Cold War had ended, has been mostly successful: India and Pakistan both tested nuclear weapons in 1998, and North Korea as a rogue state has carried out a series of nuclear tests in recent years. But compare this to what was done during the Cold War, when the US alone carried out an astonishing 1,054 nuclear tests: explosions underwater; planes deliberately crashed while carrying nukes, to see if they’d go off; nuclear weapons launched thousands of miles into the atmosphere, to see if they could create signal-jamming radiation belts; Edward Teller’s beloved underground explosions to test the use of nukes for civil engineering purposes; and so on. The fact that the US has not had a single nuclear test since 1992 is therefore, by contrast, something of an achievement.
But this Test Ban treaty, while an achievement for peace, did present US national security hawks with a problem. Not only could they not rely on any new designs of nuclear weapons, which could no longer be tested, but they also couldn’t be sure that the existing arsenal would continue to work. After all, like any mechanical device, nuclear bombs can rust — and they also contain radioactive materials that decay over time. Without a great deal of data on how nuclear weapons age, it’s difficult to “mothball” them, come back in a century and be sure that they’ll still work — and, of course, in the MAD MAD world of nuclear theory, this could weaken your deterrent or your ability to massacre millions of innocent civilians at the push of a button.
So something called the “Science-Based Stockpile Stewardship Programme” was initiated, that would use fancy scientific testing to ensure that, yes, don’t worry, Armageddon was still only an irate President or a false alarm away.
The big question you’re probably asking is — how does NIF fit into this? Well, the precise answer to that — as you might expect from any project that involves billions of dollars of scientific funding — is a little controversial. It’s easy to see why having scientists study the nuclear weapons for signs of decay, occasionally replace the plutonium cores or maintain the stockpile, and examine the response of the materials to radiation exposure would be good nuclear weapons stewardship — but how does creating what is, essentially, the smallest possible nuclear explosion help you to confirm that your current weapons work well?
Some scientists, even at the time, said that the device was “worthless… it can’t be used to maintain the stockpile, period.” And the truth is that, if your aim was really to maintain the weapons stockpile, NIF would not really represent good value for money. For a start, although thousands of weapons — some of them many decades old — have been examined, few defects have been found, and many could be corrected simply by replacing components. The scientists argued that NIF was the only place on Earth that could create nuclear-weapons-like conditions without detonating nuclear weapons, and that: “The ability to study the behavior of matter and the transfer of energy and radiation under these conditions is key to understanding the basic physics of nuclear weapons and predicting their performance without underground nuclear testing.”
Well, that’s all well and good, but is it really necessary? After all: you know the weapons work; they don’t need to be tested; they just need to be maintained and their condition monitored. You don’t need a giant laser to do that. But what you *do* need a giant laser to do, as the scientists said to Congress, is to “train and retain expertise in weapons science and nuclear engineering without the need for further underground tests.” In other words, behind the thin veneer of maintaining the stockpile, NIF sold itself to Congress as a weapons research facility: the closest thing you could get to a nuclear testing programme without the nuclear testing. If the US ever decides that it wants to start designing new kinds of nuclear weapons, the scientists, science and expertise developed at NIF will be used for that purpose. People called this out at the time, especially in the magnetic confinement fusion community, who were smarting from budget cuts — a good example is Tom Collina’s 1997 article in MIT Tech Review — but it didn’t matter. Ultimately, the weapons enthusiasts would get their weapons facility, and the laser fusion enthusiasts would get their biggest-yet device.
You will of course remember from previous episodes that laser fusion had been tried a number of times before, in devices like Nova and Shiva. While the scientists’ understanding of how to control and manipulate powerful lasers had advanced — they were no longer burning holes in their optical equipment, for example — like their magnetic confinement fusion friends, they were discovering endless brand-new ways that plasmas could refuse to behave. Instabilities like the Rayleigh-Taylor instability meant that tiny deviations from uniform irradiation resulted in bumps in the plasma which rapidly grew, bursting out from the plasma before ignition could be reached. Hot electrons carried away the energy that was supplied to the fuel, preventing the nuclei from getting hot enough to overcome their electrostatic repulsion, fuse together, and release energy. The Nova and Shiva projects advanced our understanding, but ultimately failed to achieve breakeven.
NIF followed the standard, perhaps inevitable approach to correcting these problems: it promised to be 10x bigger than its predecessor, delivering ten times as much power. Uniform irradiation would be achieved using 192 separate beamlines, each focusing its laser pulse on a separate fraction of the sphere. The Shiva device, with 20 beamlines, had been named after the four-armed Hindu God — in that sense, NIF was Shiva on speed.
Naturally, making a device ten times larger than Nova, which had already cost $200m, entailed a hike in budget. In the early 1990s, when NIF was first proposed and designed, the initial projections suggested a cost of around $600m to build the device. As we shall see, this turned out to be another example of a horrifying underestimate.
Making a device with this level of precision is extremely complicated. First off, as we’ve described before, the whole point of symmetrically irradiating the fuel capsule is to avoid instabilities: you need to create a spherical shock wave that will symmetrically compress the fuel capsule, heating and compressing its interior to astonishingly high densities and temperatures, for that inner segment of fuel to ignite and (hopefully) lead to a chain reaction that “burns” the rest of the pellet with fusion reactions before it’s blown outwards again. This means that every one of those 192 beamlines needs to be as close to identical as humanly possible — otherwise, you’ll get instabilities that arise from whichever one is misaligned, or delivering the wrong amount of energy, or firing a few billion-billionths of a second out of time.
There are an incredible number of things that can go wrong with this sort of device. To give you an idea, let’s briefly go over its design. First, there is a single, central flash of laser light from a single, coherent source. This is then split into 48 separate beams which pass through “Pre-Amplifier Modules”: they enter a laser cavity, where the beams bounce back and forth dozens of times through an illuminated amplifying medium, picking up energy; here, they’re amplified from nanojoules to millijoules. Then, the lasers pass through a circuit four times, picking up more energy as they pass through more laser material, inducing the release of yet more photons: this boosts them to around six joules. Then, they pass through the main amplifier. This is essentially a series of glass slabs, illuminated with rapid flashes of light that excite the electrons in the glass into higher energy states: when the laser passes through, it triggers the decay of these excited states and the release of yet more photons, amplifying the laser pulses to a brief but powerful total on the order of 4 megajoules. For comparison purposes, the laser now holds about as much energy as you’d get from exploding a kilogram of TNT, or the kinetic energy of half a dozen cars speeding down the motorway.
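Laid end to end, the stage energies quoted above imply some staggering gain factors. A quick sketch, using the order-of-magnitude figures from the description (with the 4 MJ taken as the total across all beams):

```python
# Gain factors between the amplification stages described in the text.
stages = {
    "initial pulse": 1e-9,               # ~nanojoules
    "pre-amplifier modules": 1e-3,       # ~millijoules
    "four-pass circuit": 6.0,            # ~six joules
    "main amplifier (all beams)": 4e6,   # ~4 megajoules total
}

energies = list(stages.values())
names = list(stages.keys())
for i in range(len(energies) - 1):
    gain = energies[i + 1] / energies[i]
    print(f"{names[i]} -> {names[i + 1]}: x{gain:.0e}")

# Overall, from nanojoules to megajoules: a gain of about 10^15.
print(f"overall gain: {energies[-1] / energies[0]:.0e}")
```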
Then, you have the spatial filtering stage, where the lasers pass through a series of filters to ensure that they remain in a very sharp focus. After this point, the beams are directed towards the target chamber, which is a steel sphere around 10m in diameter — but before they can get there, frequency conversion has to take place. The lasers used — predominantly the famous neodymium-doped glass, which does such a good job at this — produce highly monochromatic light at infrared frequencies, with a wavelength of around 1000nm, just outside the human visible range. But infrared light is, unfortunately, extremely good at heating up those undesirable hot electrons that we’ve talked about in previous episodes — the ones that absorb energy that is supposed to go to the nuclei, and then ricochet about and ruin the compression of the core.
So some thin sheets of potassium dihydrogen phosphate are put in the line of the beam. These effectively multiply the frequency of the photons, shortening the wavelength. [Explaining precisely how this works is pretty difficult, but essentially the illuminating laser light interacts with the crystal itself, producing a polarization in the crystal where negative and positive charges are forced apart. As the light passes through, this polarization oscillates and produces its own electromagnetic wave at double the frequency of the initial laser light.] The first layer halves the light’s wavelength, from around 1000nm to around 500nm, and the second layer then mixes some of the 500nm light with the remaining 1000nm light to produce light with a wavelength of around 350nm, in the ultraviolet range. By doing this, you inevitably lose some of the input energy — around half in most NIF experiments — but since you’re no longer strongly coupling to those hot electrons, you have a better chance of producing fusion in the capsule. The lack of good materials for producing UV lasers directly, and the tendency of high-frequency light to burn through optical components, are probably part of why this has to be done this way — and, in fact, even for the short fraction of the beam path where UV light is used, the optical components have to be replaced every 50 to 100 shots, which adds more expense to the project as a whole.
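The wavelength arithmetic behind those two stages is simple: in this kind of nonlinear mixing, the output frequency is the sum of the input frequencies. A short sketch, using NIF’s actual fundamental wavelength of about 1053nm (the text rounds this to around 1000nm):

```python
# Frequency doubling, then summing with the fundamental: overall tripling.
fundamental_nm = 1053.0

# Stage 1: second-harmonic generation doubles the frequency,
# halving the wavelength (to green light).
second_harmonic_nm = fundamental_nm / 2

# Stage 2: summing frequencies means adding reciprocal wavelengths:
# 1/lambda_out = 1/lambda_a + 1/lambda_b.
third_harmonic_nm = 1 / (1 / second_harmonic_nm + 1 / fundamental_nm)

print(round(second_harmonic_nm, 1), round(third_harmonic_nm, 1))  # 526.5 351.0
```

Summing the doubled frequency with the fundamental gives three times the original frequency, i.e. one-third the wavelength: the ~351nm ultraviolet light that actually reaches the target.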
Finally, after all of this, the lasers are focused onto the target — which, for NIF, was usually a pellet of deuterium-tritium fuel around the size of a ball bearing, held inside a gold cylinder called a hohlraum, around the size of your fingernail. In total, each of those beams travels around 1.5km to hit this target. So, as you can probably see, a lot of things can go wrong.
And, unfortunately, all kinds of crazy things did begin to go wrong, which Charles Seife lists quite wonderfully in his “Sun In A Bottle” book. Mammoth bones were found when excavating the site. The head of NIF was forced to resign when it turned out that he’d never actually earned his PhD (although he had been an established scientist for some years.)
Other problems were more costly. For example, if there was a tiny speck of dust on the laser glass itself, it would instantly burst into flame when the laser was fired, which would in turn damage the glass. Laser components therefore had to be assembled in extremely clean rooms, then carried around in robotic trucks with extraordinarily clean interiors — something which multiplied the cost of manufacturing the laser.
The main amplifier is powered by a huge bank of capacitors that can store 422MJ of energy for quick release and supply to the laser, in a similar way to the charged, rotating flywheels used for energy storage at JET. But these capacitors had a nasty habit of exploding during construction, so they had to be shielded behind steel barriers.
Even manufacturing the fuel pellets is incredibly difficult. The fuel pellet is about 1mm in size, but in order to avoid those nasty bumpy Rayleigh-Taylor instabilities, you can’t have any deviation on its surface greater than around 50nm. Imagine a sphere the size of a car which can’t have a single grain of sand on it and you get the picture of how smooth this target has to be — manufacturing these pellets was extremely difficult.
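That analogy scales as follows; the 5m “car-sized” sphere is a figure I’ve assumed for the illustration:

```python
# How big a 50nm bump on a 1mm pellet becomes when scaled up to a 5m sphere.
pellet_diameter_mm = 1.0
tolerance_nm = 50.0
sphere_diameter_m = 5.0   # assumed "car-sized" sphere for the analogy

scale = sphere_diameter_m / (pellet_diameter_mm * 1e-3)   # 5000x magnification
bump_mm = tolerance_nm * 1e-9 * scale * 1e3               # allowed bump, in mm

print(round(bump_mm, 2))  # 0.25 -- about the size of a fine grain of sand
```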
The initial projected cost might have been around $1bn, and the initial aim was to finish the project in 2003 after ground was first broken in 1997. But it will come as no surprise to anyone with even a passing familiarity with complex, government-funded projects — let alone those with science as complicated as NIF’s, requiring entirely new technological innovations to be developed — that both of these turned out to be severe underestimates.
By 1999/2000, it became clear that these budget and timeline estimates weren’t going to fly, and several independent audits essentially “rebaselined” the project, suggesting that a budget of more like $5bn and a completion date closer to 2008 would be more feasible. As in other projects of its kind, some of the management were dismissed for making misleading statements to government officials about its progress. But, call it sunk cost fallacy if you will, the project was not abandoned at this point — although it drew on more funding from the Weapons Stockpile coffers, making it even more military in focus.
In 2003, the first lasers were fired at NIF — in just one of its 192 beamlines — and by 2009, the project was finally, officially completed. The site was dedicated in May 2009 at a ceremony attended by many thousands of people, including the Terminator himself who was Governor of California at the time, and the first scientific experiments began in June of that year.
Sometimes, after major fusion devices fail to achieve ignition, the PR machine whirs into gear and says “well, the aim wasn’t really to make *this* one achieve ignition, it was really about learning the fundamental science that will allow the *next* device to work.” Unfortunately, those at NIF can’t really make that claim, because they referred to their device as the “National Ignition Facility” and the main set of science experiments as “The National Ignition Campaign.” During the opening ceremony, a banner was unfurled: “Bringing Star Power To Earth.”
We’ve finally got to a bit of history I remember — I read articles about NIF in 2009 and 2010 when it was first switched on, and was excited by the prospect of fusion via giant death ray, as any nerdy teenager would be. And this was at the bottom of the economic crash, and just as fears around climate change were becoming more and more mainstream — there was a good deal of hype and interest in this device. Even the more sceptical popular science articles published at the time described the task of achieving ignition as “daunting”, and argued that it might take a year or two, and perhaps had a 50–50 chance of success.
The scientists themselves were more willing to be cautious — and, after all, which fusion scientist really wants to promise unambiguously that their device will do exactly as it was supposed to…
“I personally think it’s going to be a close call,” said William Happer to the New York Times, a physicist at Princeton University who directed federal energy research for the first President George Bush. “It’s a very complicated system, and you’re dependent on many things working right.” Dr. Happer said a big issue for NIF was achieving needed symmetries at minute scales. “There’s plenty of room,” he added, “for nasty surprises.”
Next episode, we’ll look at exactly what those nasty surprises were.
Nuclear Fusion: National Almost-Ignition Facility
When last we left the National Ignition Facility, the laser had just been switched on with some great degree of fanfare, aiming to start its ignition campaign. The banners at the opening day ceremony said “Bringing Star Power to Earth!” The scientists closer to the project said that “There’s plenty of room for nasty surprises.”
So what happened in NIF’s ignition campaign? Let’s find out.
Progress at NIF is limited by a few factors. First off, of course, there’s still the whiff of secrecy and classification around what is close to a weapons-research facility. There’s also the fact that you can only fire so many laser pulses — perhaps one or two a day. Essentially, this arises due to the fact that you have to allow the apparatus to cool down before it can be fired again, or risk melting vital components. At the height of the campaign, they achieved 57 shots a month — which, of course, means that data collection is inevitably slow. But improvements were being made — the first few test shots appeared to be just 0.1% of the way towards ignition, but over the course of a couple of years, this would be improved by a factor of 100.
Technically speaking — remember the triple product for plasmas, of confinement time, density, and temperature? We used it to explain the differences between the laser fusion effort and magnetic fusion. Magnetic fusion goes for long confinement at lower density and temperature, while laser fusion aims to briefly achieve much higher densities and temperatures. A measure of fusion performance comes from multiplying all three together. Well, the triple product achieved by NIF was higher than that of any tokamak. But since you basically need the triple product to be extremely high (you’re only producing energy for nanoseconds at a time, so the instantaneous power output has to be enormous to get a decent amount of energy), it’s not quite comparing apples with apples. And problems continued.
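To make that “not quite apples with apples” point concrete, here’s an illustrative sketch with rough, assumed orders of magnitude — textbook-style numbers, not measured values from JET or NIF:

```python
# Illustrative comparison of the Lawson triple product (n * T * tau) for a
# magnetic-confinement machine versus a laser-fusion shot. All figures below
# are assumed orders of magnitude for illustration only.

def triple_product(n_per_m3: float, T_keV: float, tau_s: float) -> float:
    """Density * temperature * confinement time, in keV·s/m^3."""
    return n_per_m3 * T_keV * tau_s

# Magnetic confinement: modest density, confinement of order a second.
tokamak = triple_product(n_per_m3=1e20, T_keV=10, tau_s=1.0)

# Inertial confinement: enormous compressed density, but the fuel only
# holds together for a fleeting sub-nanosecond instant.
icf = triple_product(n_per_m3=1e31, T_keV=10, tau_s=1e-10)

print(f"tokamak: {tokamak:.1e}  ICF: {icf:.1e}  ratio: {icf / tokamak:.0f}x")
```

The product alone can come out higher for the laser shot, but because the conditions last only an instant, the total energy released per shot stays small — which is exactly why the comparison is slippery.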
Tiny specks of dust ruined some shots. Others were ruined by an unexpected problem — the target chamber, which is necessarily as complete a vacuum as you can get, contained small amounts of water vapour: these froze to the capsule and resulted in asymmetric implosions, so the hohlraum had to be redesigned to incorporate some tiny layers of glass. One of the issues with NIF is familiar to anyone who’s seen those Ghostbusters films — don’t cross the streams! If there’s any interference or interaction between adjacent laser beams, you’re no longer getting the nice symmetrical irradiation you were hoping for. By 2011, the DoE official overseeing the project was already telling the press that “progress was not as fast as I had hoped.” However, the project still seemed to hang in the balance — as late as January 2012, the head of laser fusion science at NIF, Mike Dunne, told a conference that “We are now in a position to say with some confidence that ignition will happen in the next 6–18 months.” And by July 5, 2012, NIF had produced its peak-power shot, delivering 500TW to the target chamber.
Unfortunately, that was also really the peak of the NIF’s ignition campaign. When it came time for a progress review towards ignition, published in mid-July 2012, the news was not altogether good. Quoting directly from that report:
“All observers note that the functionality of the laser; the quality of the diagnostics, optics and targets; and the operations by the NIC and NIF teams have all been outstanding. By comparison with the startup of other large science facilities, the commissioning and startup of experimental operation on NIF has demonstrated an “unprecedented level of quality and accomplishment,” according to one reviewer. Experiments on capsule compression, with improved diagnostic detail and exquisite laser pulse shape and energy control, have provided important insights into the details of the ignition capsule compression.
The integrated conclusion based on this extensive period of experimentation, however, is that considerable hurdles must be overcome to reach ignition or the goal of observing unequivocal alpha heating. Indeed the reviewers note that given the unknowns with the present “semiempirical” approach, the probability of ignition before the end of December is extremely low and even the goal of demonstrating unambiguous alpha heating is challenging.”
In other words, NIF’s lasers themselves worked as they were supposed to: it achieved all of its design specifications, and was even capable of delivering more energy to the chamber than had been initially planned. The experiments were running smoothly and new laser fusion science was being discovered and analysed. But the results demonstrated that the device was still a considerable way off achieving ignition. The alpha-heating referred to in the report was a key part of the mathematics of laser fusion — how laser fusion was supposed to work, in the plasma physics simulations that were being run at the time. When you compress the capsule, you do produce fusion reactions — this had been true for a long time — and these deuterium-tritium fusions produce alpha particles (plus the famous fusion neutrons). These alpha particles are produced with a great deal of kinetic energy, and they slow down by colliding primarily with the hot electrons in the plasma. For fusion to work, you need these hot alpha particles to pass on their energy to the remaining, un-fused material — enough to push that fuel over the fusion threshold itself. That’s what is really meant by “ignition”: the lasers provide the heating and compression, and the “spark” — and then a chain reaction begins as the fuel pellet begins to burn, causing the rest of the fuel to ignite. So it’s critical for the scientists at NIF to demonstrate that this process is going on, and seek to change the experiment design to get as much alpha heating as possible.
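The energy bookkeeping behind alpha heating can be sketched from the standard D-T reaction figures — 17.6 MeV released per fusion, split between the alpha and the neutron by momentum conservation:

```python
# Energy split in one D-T fusion reaction, which is what makes alpha heating
# possible. Because the two products fly apart with equal and opposite
# momenta, kinetic energy divides in inverse proportion to mass: the heavy
# alpha (mass ~4) gets the smaller share, the neutron (mass ~1) the larger.
# These are standard textbook values.

E_total_MeV = 17.6
m_alpha, m_neutron = 4.0, 1.0   # approximate mass numbers

E_alpha = E_total_MeV * m_neutron / (m_alpha + m_neutron)    # ~3.5 MeV
E_neutron = E_total_MeV * m_alpha / (m_alpha + m_neutron)    # ~14.1 MeV

# Only the charged alpha is stopped within the plasma and can heat the
# remaining fuel; the neutron simply escapes. So at most ~20% of the fusion
# energy is available for the self-heating "burn" that defines ignition.
print(f"alpha: {E_alpha:.2f} MeV, neutron: {E_neutron:.2f} MeV")
print(f"fraction available for self-heating: {E_alpha / E_total_MeV:.0%}")
```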
Instead, though, they found that they couldn’t even be sure that — at these temperatures and pressures — alpha heating was occurring at all.
The report also expressed grave concerns about the theoretical and computational modelling — the code used to predict how the plasmas would behave. This really is a familiar story, as much the same thing happened with the Nova device. In both cases, the simulations in advance had predicted that — well, if this device works, you’ll be well within the range of energies and densities where ignition can occur. The devices had worked: they had provided the correct amount of input power, but it wasn’t sufficient to allow for ignition to happen.
Ultimately, a lot of the problems the report identified were the same old laser fusion issues. The lasers were interacting with the plasma produced when they collided with the target, which could lead to non-uniform irradiation of that target. The targets themselves didn’t always implode with enough symmetry, and the scientists weren’t sure if this was due to laser-plasma interactions or due to something wrong with the target’s design. The theoretical predictions had suggested that 30–40% of the kinetic energy dumped on the plasma target would be converted into heat energy in the fuel hot-spot — instead, it was more like 10%, which meant that the pressures being produced fell short of ignition pressure by a factor of two or three. And the scientists at NIF, although they had designed an incredibly smooth target, still encountered Rayleigh-Taylor type instabilities which led to less compression. Remember, these are the instabilities where you essentially try to squish a ball of jelly, only to find big globs of it passing straight through your fingers and not getting crushed at all. And the ablation material that compressed the fuel was mixing with the fuel, producing more uncertainty in the results.
The computer code, and our understanding of the physics of how inertial confinement fusion would work, was wrong — and, the report noted, this meant that they couldn’t really trust it to predict future experimental designs. Those behind the ignition campaign had hoped that, with a laser ten times more powerful, it might just be a simple matter of making gradual, “engineering tweaks” to the laser pulse shape, the capsule design, and so on — instead, it seemed that there was a great deal more about the physics of plasma instabilities to be understood. The scientists recommended new runs that weren’t aimed at achieving ignition right away, but instead “diagnostic runs” that would allow them to build up a better understanding of the physics of laser fusion. Of course, this means that in some sense NIF is a successful science experiment — it’s discovering and refining our understanding of plasma physics, and the next generation of theory and modelling will reflect that improved understanding. But NIF was almost sold as a prototype, the device that would achieve ignition, and there’s no way around it: as of 2019, it has not done so, and those that work there are admitting that it’s likely that it never will. The report in 2012 said that, “while the data obtained don’t exclude the possibility of ignition, it remains a considerable technical challenge with an uncertain outcome.” It was the first nail in the coffin for NIF’s dreams of ignition.
The US Congress was only willing to fund the pure ignition campaign until October 2012 — after that, the lab’s time would have to be increasingly shared between its weapons-related duties and further attempts at inertial confinement fusion.
Laser fusion experiments at NIF have continued, and continue to this day. The immediate effect was to refocus the efforts — not an engineering campaign bent on achieving ignition, but a scientific campaign attempting to explain why ignition wasn’t working. Improvements arose. One notable set of pulses in 2014 produced a few more excitable headlines — and some misleading ones. In the 2014 experiment, a specially shaped laser pulse was used, in something the researchers dubbed a “high-foot” laser pulse. This effectively involves hitting the capsule with two pulses — an initial, very sharp pulse, followed swiftly by a slightly smaller one. According to our best understanding of laser fusion physics, this sacrifices the maximum possible energy gain for a greater degree of control over the capsule’s implosion — and the aim here was to slow down the growth of those Rayleigh-Taylor instabilities, those tendrils of plasma exploding outwards that made symmetrical compression so difficult to obtain.
What was good about this experiment was that the physicists were able to actually measure, with new diagnostics, the growth of the Rayleigh-Taylor instabilities — and they could confirm that the reason the Ignition Campaign shots had been unsuccessful was broadly down to the R-T instabilities. The high-foot pulse reduced these instabilities, but revealed — to quote one paper — “additional instabilities that we had suspected may exist, but were previously swamped by the Rayleigh-Taylor signal.” Scientific progress was being made.
But NIF were a little bit naughty in how they reported that scientific progress — defining and reporting on a new milestone, which was called “fuel gain.” They reported that their latest runs had demonstrated “fuel gain”, a factor of around 1.2–1.9.
What this means, effectively, is that the energy produced by fusion reactions in these runs was greater than the amount of energy that was deposited into the hotspot in the capsule during the actual implosion process itself. This is, of course, a notable achievement — it’s something that no previous inertial confinement fusion experiment had achieved — and it’s demonstrating that the “alpha-heating” that was missing in 2012 has, by now, been observed at NIF. But in some ways, it’s also an accountancy trick, and a little confusing.
The definition of ignition is that the fuel should be heated faster by fusion reactions than it is cooled by losses — to radiation, hot electrons carrying energy away, and so on. Once the fusion reaction is “ignited”, you can take away the external laser drive and the pellet will continue to “burn”, producing fusion reactions and energy on its own. Meanwhile, the definition of “breakeven” is a fusion reaction that releases as much energy through fusion reactions as is supplied to the fusion fuel.
So what was achieved in 2014, while impressive, was not ignition *or* breakeven. The energy that actually ends up getting supplied to the D-T fuel is only a fraction of the energy that’s deposited onto the capsule (remember, other parts of it go into heating up electrons or end up being radiated away), and this is only a fraction of the energy required to run the lasers (for example, we talked about how converting the laser’s frequency already involves losses of around half the power that’s supplied.)
The problem with the “fuel gain” is that, if you look at it quickly, or if you misreport it (even though the scientific paper states in its very first line that this is neither ignition nor breakeven), you might think that they “got more out than they put in” and that NIF therefore achieved its goal. But this is not true. Ignition has not yet been achieved in any laser fusion experiment on Earth, and neither has breakeven.
The reason I’m being snarky about this is just to point out how far we still have to go. Even in the highest-energy-producing shot, the ratio of fusion energy out to laser energy in was less than 1%. Looking at ignition, rather than breakeven, experiments at NIF have pushed the plasma from around 10% of the pressure needed for ignition to around 30%.
Even NIF, which took over a decade to build and cost billions of dollars — for all its scientific achievements — is a long way off achieving the goals of fusion.
The problem with this kind of accountancy trick is that there are all kinds of different ways you can do the accountancy. My dear listeners, you have all been immersed in the world of fusion researchers and their jargon for a while now: your innocence is lost. But to an ordinary person, if you’re making a device that’s supposed to solve all of our energy needs, it needs to produce more energy than it requires to run. Not more energy than eventually makes it to the capsule; not more energy than is used in heating the plasma; but more energy than its *total use*. We talked about some of these problems in the Buzzkill episodes for tokamaks — even once you have breakeven, to reach economic “engineering breakeven” you have to convert that energy into electricity, where you’ll probably lose at least two-thirds of it; you have to produce enough energy to cool the magnets as well as heat the plasma; and you have to produce enough energy to make up for the reactor’s downtime, and to take care of the air-conditioning and the vacuum pumps and the general consumption of the whole power plant. There are a lot of losses, and a lot to take away from the system, before you can even begin to call it a power plant.
And NIF suffers from similar problems. In the fuel gain shots, the lasers fired 1.8 million joules of energy at the hohlraum. The fusion pellet eventually absorbs just 12,000 joules of that energy, and it releases around 18,000 joules, which gives you that fuel gain of 1.5. If you gave £1.8m to your financial advisor, and they gave you £18,000 in return — explaining that they wasted most of it, but the £12,000 they did end up investing was performing really well — you might wish you’d invested in solar panels instead.
And, of course, if you want to be a real buzzkill, it’s even worse than that, because only a small amount of the energy required to operate NIF ends up concentrated in that laser pulse. To amplify the laser, you need to — amongst other things — charge that bank of capacitors in the amplifier (remember, the ones that occasionally used to explode?). If you assume that the capacitors were fully charged for these shots, and that they dominate NIF’s total energy usage, they store 422MJ of energy. So fuel gain really is a very decent scientific achievement, but if all you see is “fusion energy out minus energy in”, you’re really only getting out 0.004% of the energy you put in. How large does that number have to be before you can really call it a feasible power plant, let alone an economically competitive one? Some conceptual studies suggest that you might need to get a fusion gain of a factor of around 100 for the reactor to be economically viable. NIF is a long way short of that. In fact, a laser design like NIF could simply never achieve this: too much is wasted along the way. Some estimate the maximum credible yield from a single shot at NIF to be around 45MJ — compared to the 400-odd MJ required to fire the damn laser in the first place. Ultimately, there are a lot of losses along the way in the laser system. Around 25% of the energy taken from the grid to power JET ends up heating the plasma — compared to maybe 0.00025% of the energy from the grid that’s used to power NIF. So comparing inertial confinement fusion “breakeven” with magnetic confinement fusion “breakeven” isn’t really fair. Breakeven is Q = 1. If JET runs at Q = 4, then its plasma is producing as much power as the tokamak takes from the grid. If it runs at Q=20, it might be economically feasible. NIF would need to run at Q=40,000 to produce as much energy as it takes from the grid. So it’s really not comparable. 
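Putting the whole accountancy chain in one place, using the figures quoted above (and assuming, as above, that the fully charged capacitor bank dominates the per-shot energy bill):

```python
# The accountancy chain for the 2014 "fuel gain" shots, using figures quoted
# in the text: 422 MJ stored in the capacitor bank, 1.8 MJ of laser light
# delivered to the hohlraum, ~12 kJ absorbed by the fuel hot-spot, ~18 kJ of
# fusion energy released. A rough sketch under the assumption that the
# capacitors dominate NIF's wall-plug energy use per shot.

E_capacitors_J = 422e6    # stored per shot (assumed to dominate wall-plug use)
E_laser_J = 1.8e6         # laser energy delivered to the hohlraum
E_fuel_in_J = 12e3        # energy actually absorbed by the fuel
E_fusion_out_J = 18e3     # fusion energy released

fuel_gain = E_fusion_out_J / E_fuel_in_J          # the headline number
target_gain = E_fusion_out_J / E_laser_J          # "scientific" gain
wallplug_gain = E_fusion_out_J / E_capacitors_J   # what the grid would see

print(f"fuel gain:      {fuel_gain:.1f}")
print(f"target gain:    {target_gain:.3f}")
print(f"wall-plug gain: {wallplug_gain:.2e} (~{wallplug_gain:.4%})")
```

Each step down the chain divides the gain by another one or two orders of magnitude — which is the whole point of the financial-advisor analogy.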
The reality is that either you need a much bigger laser producing much, much greater power from the target — or you need a different design.
And remember how many practical hurdles remain that laser fusion really hasn’t been able to think about yet. Once you produce this energy, how do you capture and harness it? How much of the heat generated by the fuel pellet can really be captured and converted back into electricity? How are you going to fit an entire power plant on top of an incredibly large and delicate laser facility? And, given that NIF is currently only firing a single shot a day while its components are allowed to cool down and the next shot is prepared, how much energy could you feasibly produce with this kind of setup anyway? If your reactor can only fire once a day, you can hardly claim to be solving the “intermittent supply” problems associated with renewable energy.
In fact, if you wanted to achieve a power output of 500MW — which is about the size of an ordinary power plant, and about as much power as the ITER tokamak hopes to produce — some conceptual designs I’ve read suggest you’d need to fire the lasers approximately 50 times per second.
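That figure follows from simple division — assuming, purely for illustration, that each shot must deliver on the order of 10MJ of usable energy:

```python
# Back-of-envelope rep-rate requirement for a 500 MW laser-fusion plant.
# The 10 MJ usable-energy-per-shot figure is my own assumption for
# illustration, not a number from any specific conceptual design.

plant_power_W = 500e6        # target output, roughly an ordinary power plant
energy_per_shot_J = 10e6     # assumed usable energy per shot

shots_per_second = plant_power_W / energy_per_shot_J
print(f"required rep rate: {shots_per_second:.0f} shots per second")
# Compare with NIF's actual rate of roughly one shot per day.
```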
Now I don’t work on this stuff. I’m not a materials scientist. But the prospect of scaling this up to a point where you might call it a power plant… there are so many issues here. For a start, if the materials take hours to cool down before you can even think of another shot — how can you make optical crystals that won’t take as long to cool down? Unlike, say, the first wall in ITER — which is already a huge materials science challenge, because it has to withstand neutron bombardment — you can’t make these amplifiers out of just anything: they have to be made out of materials that amplify lasers. Does such a material even exist? How often would it break under this kind of intense punishment, and how much downtime would your plant expect to have?
Then, of course, there’s the ability to actually fire the laser that often. You’re going to need to be able to supply power to the laser somehow. Can you charge up and discharge those capacitors 50 times a second, given that they sometimes explode? Given that they store hundreds of megajoules of energy, charging them multiple times a second is going to require an enormous, continuous power supply — you’d need a whole different power plant simply dedicated to producing energy to charge the lasers in your laser fusion power plant, which raises the question of whether it’s really worth building the insanely complicated fusion power plant in the first place. How long does it actually take to “charge and fire” the laser pulse — and what’s the minimum feasible time to do that in? In all honesty, it even seems like a bloody difficult engineering challenge to actually position the capsule at the exact point in the vacuum that the laser beams are aimed at, 50 times a second, when we’ve already seen that the tiniest possible grain of sand on the capsule or tiniest misalignment means that the whole shot fails to produce any energy at all. And if you want to get into the economics of it — those gold capsules and perfectly-spherical fuel pellets don’t come cheap. Some estimates suggest they might need to cost 25 cents each to be economically feasible. They currently cost $25,000 to manufacture. Buying in bulk can only help so much.
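A quick sketch of the target economics at that firing rate, using the per-target costs quoted above (and the purely illustrative 50-shots-per-second rate from the conceptual designs mentioned earlier):

```python
# Rough target-cost arithmetic for a hypothetical 50-shots-per-second plant,
# using the per-target figures from the text: $25,000 today versus the
# ~25 cents some estimates say would be economically feasible.

shots_per_second = 50                    # assumed plant rep rate
seconds_per_day = 24 * 60 * 60

targets_per_day = shots_per_second * seconds_per_day   # targets consumed daily

cost_today = targets_per_day * 25_000    # at today's manufacturing cost
cost_needed = targets_per_day * 0.25     # at the hoped-for cost

print(f"targets per day:          {targets_per_day:,}")
print(f"daily target bill today:  ${cost_today:,.0f}")
print(f"daily target bill needed: ${cost_needed:,.0f}")
```

Even at 25 cents a target, a plant like this would burn through millions of capsules a day — which is why "buying in bulk can only help so much."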
So it’s clear. Even if NIF had achieved ignition, or you could make a really good case that its successor would achieve ignition… the challenges don’t end there — that’s where they really begin. Making laser fusion into a scientific reality is proving hard enough: we don’t know for sure that it can be done — making it into a commercial one is truly mindboggling. The scientists at NIF did briefly fund a project called LIFE, which was aimed at investigating the practicalities of harnessing fusion energy from NIF or its successors — but many pointed out that, given how far NIF was from achieving ignition, the practicalities of a laser fusion reactor design were still a long way off, and the project was quietly dropped a couple of years ago.
One of the more recent DoE reports is a little more stark about the prospects for NIF: “The question is if the NIF will be able to reach ignition in its current configuration and not when it will occur”, said the 2016 report.
Computer codes and models predicting high energy gain from the fuel capsules were “not capturing the necessary physics”, the review’s authors wrote. Experimental efforts were “frustrated by the inability to distinguish key differences” between laser shots, with similar set-ups producing scattered results. Most damningly, the review cited a “failed approach to scientific program management” based on “circumvent[ing] problems rather than understanding and addressing them directly”.
Moreover, the report, while being clear that scientific progress has been made, implies that there’s still a long way to go to understand the fiendishly complicated physics of laser-capsule interactions: the computer modelling and theory that lies behind these interactions are still not considered to be sufficiently reliable to predict how a totally new design will behave, and the report cautions that scientists must “prepare for the possibility that there is no existing experimental setup that will achieve ignition.”
Naturally, there is various associated sniping. Some people who predicted as far back as the early 1990s that NIF could not succeed have spent the best part of their careers saying the same thing. And there’s disagreement in the scientific community about where to go next. Some think that the instabilities can be overcome by using bigger capsules — but these might in turn require a bigger laser to drive them. Others think that a new kind of laser, one based on krypton fluoride gas rather than solid crystals, will provide a smoother beam and a smoother route to ignition. Some further experiments are considering using thin layers of diamond or beryllium rather than plastic as the outer layer of the capsule to compress the fuel.
Of course, NIF is far from the only inertial confinement fusion project out there, and far from the only large laser facility. There are dozens of other concepts and ideas that exist and are being tested throughout the world, and we’ll come back to some of them, perhaps, when we talk about modern-day startups that are trying to get fusion to work. Perhaps the route with the most hype in recent years is something called “Fast Ignition”, which effectively separates the stages where compression and heating occur. First, the pellet is compressed by an initial, long pulse of lasers: then, at the point of maximum density, it’s blasted with another, ultra-high-power laser beam that delivers a large amount of energy directly to the hot-spot in the compressed core. The idea is that this laser actually bores straight through the outer layers of plasma to that hot core — a technique referred to as “plasma bore-through.”
There was some considerable hype around this, especially because it gets you around some of the main problems associated with making an indirect-drive design like NIF practical. After all, one of the problems with NIF is that only a tiny fraction of the laser energy is actually supplied to that central hotspot, which is why “scientific breakeven” is so far from engineering breakeven, and why you have to fire your laser fusion financial advisor. With this method, you hope to pour a great deal of laser energy directly into the hotspot at its hottest — and thus, overall, reduce the power of the laser that’s required. There were plans to build a big facility called HIPER in the EU that would perform this.
It’s actually kind of difficult to find out much about the fate of HIPER. I can find plenty of rusty old slideshows suggesting that it would have been built by 2015, referring to the “imminent ignition at NIF”, so that’s obviously not the plan any more. The most recent documentation I could find was from 2014, where the report on the “Preparatory Phase” was still talking about NIF achieving ignition within a couple of years. As far as I can tell, HIPER still only exists on paper, and has done for more than a decade now — although there are still certainly people at laser facilities looking at “fast ignition” and other techniques.
China, for example, has its own inertial confinement fusion project going at a particular laser facility — and a recent paper even claimed that they might achieve ignition by 2020 — but honestly, I think you’ll agree by now that we should believe that when we see it. There’s also the Laser Megajoule facility in France, and scientists in Japan working on fast ignition techniques.
Lots of inertial confinement fusion advocates point out that we know it will work, because it works in nuclear bombs and it worked in the underground “Halite/Centurion” tests — one can, apparently, indeed get energy out of a D-T pellet in this way. We just don’t know precisely what the lower threshold for ignition might be. The lower threshold for the Halite tests was basically set by the smallest-yield bomb that they used. Some say a laser 10x the size of NIF might achieve it; others suggest a laser 100x the size of NIF is needed. The results from Halite-Centurion were evidently promising, since the US immediately began to focus on ICF — but they’re classified: how can we know whether we’re really close, or if it’s just wishful thinking?
But I’m not even sure that we’ve established that this can really work on a scale smaller than the smallest nuclear bomb yet developed. Overcoming these instabilities remains an extremely difficult challenge. Converting this into a practical power plant will require even higher energy gains if you need a 10MJ or 100MJ laser. In the 1970s and 1980s, it had appeared to be a shortcut to fusion — a faster, smarter route than all of this mucking around with tokamaks and magnetic fields — one enabled by the invention of a new technology. But, like the magnetic confinement fusion teams before them, they found that plasmas were capable of behaving in strange and unpleasant ways far more often than one might have hoped.
Ultimately, the failure of NIF has likely proved very damaging for the future of inertial confinement fusion. The inertial confinement fusion efforts were mostly focused in the United States, as magnetic confinement was increasingly abandoned, and never had quite the same level of international collaboration as JET and ITER did. When you have one big, highly-publicised experiment that’s aiming to achieve a certain goal, and then it doesn’t succeed, it’s hard to see how it didn’t set the project back by decades. In my mind, the failure of NIF to achieve ignition is similar to what might happen if ITER failed to reach breakeven — or, perhaps, what happened when JET failed to achieve breakeven. It really has a massive chilling effect on the whole field — for new scientists, for established scientists, and for funding bodies. There are a great number of people doing excellent, ground-breaking work on laser fusion — I’ve met many of them — but it’s clear that we still have an awfully long way to go.
One huge project ultimately didn’t succeed in its goal — and it would take decades until anyone would be willing or able to fund or construct another one in the same field. I mean — JET started operations in 1983, and ITER might not start until 2025. Could we see a gap that long between big experiments for ICF? With so many technical challenges still to overcome, it’s really difficult to see how anyone is going to be persuaded to build a set of lasers that’s ten times bigger than NIF, at the cost of tens of billions of dollars. If inertial confinement fusion is going to work, it will need some new, unforeseen development or breakthrough, possibly using a quite different approach to the one NIF has used so far. I won’t rule out such a breakthrough: it would be foolish to do that. But I wouldn’t bet my life savings on it, either. If nothing else, at least NIF can always boast that part of Star Trek: Into Darkness was filmed there. (The target chamber stood in for the warp core of the Starship Enterprise.)
As far as I can tell, here in 2019 as I write this, NIF still conducts inertial confinement fusion experiments, testing out different capsule designs and different laser parameters — but most of its recent output is focused on weapons testing and design. There has been some talk of trying to reconfigure NIF — instead of blasting the target via a hohlraum, which is the metal container that produces the symmetric X-rays, hitting the target directly with the laser beams: so-called “direct drive”. Direct drive has its advocates, and maybe some tests will be carried out using it at NIF, but it’s not what the laser was originally designed for. NIF’s lasers were not designed to be perfectly spherically symmetric, but instead to uniformly illuminate the hohlraum, so a major reconfiguration would be needed. And the best calculations available at the moment suggest that, in direct drive mode, the laser might still only get 50% of the way to ignition. Given that the indirect drive calculations suggested ignition was within reach — only to find that it was more like 10% of the way there — maybe the direct drive approach wouldn’t get even that far.
Some other more recent experiments have demonstrated that, if you use liquid hydrogen — which can form a more perfect sphere in vacuum, filling in bumps and imperfections — surrounded by a thin layer of foam — and turn the power of the laser down, so that the instabilities are less excited, you can achieve higher pressures and densities than before. In other words, it’s basically easier to achieve maximum compression with the laser turned up to 5 than it is to achieve half compression with the laser turned up to 11. It’s not going to get them near to their original goal of ignition, but under these conditions, the plasma theory and modelling that exists so far works quite well — which means they can seek to optimise it some. So work done on inertial confinement fusion at NIF might still guide us towards inertial confinement fusion in the future: it’s not a hopeless endeavour — but it’s tricky to see funding for a NIF successor on the horizon any time soon.
More recently, NIF has also begun to “rent itself out” to various different groups of scientists interested in access to the world’s most powerful laser facility, for materials science under extreme conditions. There is some amazing science being done at NIF: I mean, it’s the most powerful laser on the planet, capable of creating hot and dense conditions that can be found nowhere else on Earth. One of the results that caught my eye was their use of NIF to compress hydrogen down into its metallic form, and study its properties. And, of course, we now have a greater understanding of the physics of inertial confinement fusion. The best science available when it was constructed suggested that it might work — and I’d always argue for spending on science, even the big flashy experiments, over plenty of other things that governments choose to spend their money on (like, for example, nuclear weapons). But it is unavoidable, inescapable: NIF aimed to achieve ignition. It failed. It may leave behind a legacy of awesome science and, who knows, it may even be one of those stepping-stones along the road to fusion in the future. But in its primary objective, it did not succeed. That will always, too, be part of the legacy.
Next time, then, we’ll talk about the “surviving” major international fusion experiment. It’s been a while on our show since the USSR and the USA agreed in principle to a huge, international collaboration on the world’s biggest tokamak — a truly global scientific endeavour, with the aim of liberating nuclear fusion for the world. And still, today, as I write this, the device is being constructed in the South of France — and still, many are heralding and hyping it as the route towards finally making fusion power a reality. So, um, how’s that working out? The next episodes of the show will take us right up to the present day — and into the future.
ITER, show us the way!
Thanks for listening etc.
Nuclear Fusion: ITER’s Challenge
Hello, and welcome to Physical Attraction. This episode, in our fusion odyssey, we’re going to talk about ITER.
Where we last left magnetic confinement fusion, we were talking about the achievements of JET and other tokamaks around the world — getting closer than ever to breakeven, and discovering new modes of operation (such as the “H-mode”) that improved plasma confinement. The ITER (International Thermonuclear Experimental Reactor) project will take the story of tokamak fusion reactors right up to the present day — as I write this, you can still get daily updates on its construction, and the machine is due to begin fusion experiments in earnest in 2025. In the grand scheme dreamt up by today’s fusion glitterati, ITER is intended to be the last purely experimental magnetic confinement fusion device that will ever be built. The plasma in ITER will achieve a sustained burn, and it will generate 10x as much power as is supplied to the plasma — demonstrating once and for all that magnetic confinement fusion can be used to release energy. Studies from ITER will then determine the practicalities and necessary design specification of the first-ever magnetic confinement fusion power plant — one that attempts to overcome all of the various barriers, problems and inconveniences involved in actually using the energy released by fusion in a practical power plant. And, perhaps, on this timeline, by 2050 or 2060, fusion power might finally supply energy to the grid — and begin the long, arduous process of learning to be cost-competitive with the alternatives that exist out there right now. Because ITER is the plant that is supposed to demonstrate the feasibility of fusion with tokamaks — and because it has taken so long to build, and cost so much, and required such a large international collaboration — the stakes for ITER have always felt very high. If ITER, like NIF before it, is judged to have failed in its primary scientific goal of achieving breakeven and a burning plasma, then it’s incredibly difficult to see how another unprecedentedly large tokamak gets built.
That doesn’t, of course, mean the end of fusion power, or efforts to pursue it, or even science in tokamaks — but it’s difficult to imagine how chilling the effect would be, and, in all honesty, we’d then be relying on something unexpected to come out of a startup or a national fusion project like the ones in China to have any prayer of realising fusion power on the grid within our lifetimes.
In this episode, we’re going to talk about the history of the ITER project, and some of the challenges that it faced and will continue to face in the future.
On this show, we’ve talked before — particularly in some of the Nobel Prize episodes — about how science, especially experimental science, is becoming a huge, collaborative endeavour. In many ways this is the price of progress: it’s much more difficult for individuals to do truly groundbreaking research in their back gardens simply because of how much we’ve already explored. In that sense, ITER is arguably the world’s biggest science experiment: 35 countries, representing over half the world’s population — from the European Union, the US, China, India, Japan, South Korea, Russia, and so on — and a construction budget currently looming at around $22bn (and really, the only way is up). Perhaps only the Large Hadron Collider or the International Space Station really compares in terms of the vast amount of effort put into such a singular project.
How does ITER compare to JET, the most successful tokamak to date? The radius of ITER’s central donut torus — where the plasma is contained — is expected to be 6.2m compared to JET’s 3m. It will contain 10x as much plasma as JET did. It aims to multiply the power supplied to its plasma by a factor of 10 compared to JET’s 0.7. It will supply 50MW of heating to its plasma compared to 26MW at JET. They’re aiming to confine the plasma in H-mode for more than 5 minutes, compared to JET’s record confinement time of 20 seconds or so. And the current that the ITER tokamak will run through the plasma, to pinch and compress it and prevent it drifting to the edges of the donut in the magnetic fields, will be 17 million amps compared to 7 million.
The central device may only be 6m in radius, but don’t let that fool you — there’s an enormous amount of auxiliary stuff, including the heating mechanisms, the superconducting magnets and their cooling apparatus, the first wall to protect these components from damage, and so on — such that the whole ITER site will cover nearly a square mile in the South of France when everything is constructed. The wires for the external superconducting magnet coils will stretch out for 100,000 kilometers — that’s enough to wrap all the way around the Earth twice and still have some left over. The vacuum vessel that contains the central tokamak will be 11m tall and 20m in diameter, and it will weigh around 18,000 tonnes. In other words, the device is truly monumental in size. A large part of this is to ensure that the confinement time is high enough — given that confinement time is limited by instabilities and turbulence, when these cannot be reduced further using different plasma configurations, one way to improve confinement time is to create a larger device with more plasma so that the energy cannot escape as quickly — and that’s a large part of how ITER improves on JET. Seeing why this works is fairly simple, too. Ultimately, the fusion power that’s generated depends on the volume of the fusing plasma, while the ability of energy to escape depends on its surface area and its edges — that’s where radiation and bursts of particles are ultimately escaping from. The physics — things like the temperature, the density, and the external magnetic fields — sets the rate at which power is generated in the plasma’s bulk, and escapes via the surface area. So, naively, if you scale things up, that surface-area-to-volume ratio will always get smaller, and you’ll always get closer to Q=1 and breakeven.
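That surface-to-volume argument can be made concrete with the standard torus formulas. Here’s a minimal numerical sketch — the radii are rough, illustrative numbers of my own choosing, not the machines’ official dimensions:

```python
import math

def torus_surface_to_volume(R, a):
    """Surface-area-to-volume ratio of an idealised torus.

    R: major radius (centre of the donut to the centre of the tube), metres
    a: minor radius (radius of the tube itself), metres
    Surface area = 4*pi^2*R*a; volume = 2*pi^2*R*a^2,
    so the ratio reduces to 2/a — it depends only on the tube's radius.
    """
    surface_area = 4 * math.pi**2 * R * a
    volume = 2 * math.pi**2 * R * a**2
    return surface_area / volume

# Rough, JET-like and ITER-like proportions (illustrative only):
jet_like = torus_surface_to_volume(R=3.0, a=1.0)
iter_like = torus_surface_to_volume(R=6.2, a=2.0)

print(f"Smaller torus: S/V = {jet_like:.2f} per metre")
print(f"Larger torus:  S/V = {iter_like:.2f} per metre")
```

Doubling the minor radius halves the surface-to-volume ratio, so proportionally less of the plasma’s power can leak out through its edge — which is the naive scaling argument for building ever-bigger machines.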
This is why tokamaks and fusion devices have been getting progressively bigger since the cute, tabletop devices we first discussed in the 50s and 60s — but of course, it brings with it a whole host of new challenges.
Some of the challenges associated with ITER involve how to drive that 50MW of power into the plasma in the first place. In previous episodes, we talked about neutral beam injection as a method of heating the plasma — you accelerate ions to high energies with an electric field, then you pass them through a nice cloud of electrons where they can pick up another electron and become atoms again. These neutral atoms then don’t destabilise the plasma (as much as they would if they had electric fields, anyway) and they crash into the plasma ions, passing on their kinetic energy and thus heating up the plasma (and also helping to drive the plasma current, because you generally shoot them in the direction of the plasma current so that the ions in the plasma will preferentially move in that direction.)
The problem with ITER is that it’s so big, and has such a large volume of plasma, that you need to shoot these particles in at very high energies so that they can even get to the centre of the plasma, which is where you really want to heat it. The energy they need is around 1MeV (mega-electron volt) so that they can penetrate deep enough into the plasma — but at these energies it’s extremely difficult to make them neutral, as they’re too hot to easily recombine with the electrons again. So ITER’s neutral beam heating goes the other way — first, you add an extra electron to the atom; then it’s negatively charged, and you accelerate it to high velocities — then you knock the extra electron off, leaving fast-moving, neutral atoms that can heat the centre of the plasma. This has never been done on any previous tokamak, so it’s brand-new technology, but lots of progress has been made in developing it. Further heating comes from electromagnetic waves, which blast the plasma at its cyclotron frequency, causing the ions and electrons to resonate and pick up energy from the electromagnetic waves — gradually heating them up again. To do this, you need big, powerful electrical antennas — just another example of the auxiliary apparatus needed to make the tokamak work.
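To get a feel for what a 1MeV beam atom looks like, here’s a back-of-envelope calculation. The constants and the classical approximation are my own additions, not figures from the episode (1MeV is tiny next to the deuteron’s ~1,876MeV rest-mass energy, so the classical formula is fine here):

```python
import math

EV_TO_JOULES = 1.602176634e-19   # one electronvolt, in joules (exact by definition)
DEUTERON_MASS_KG = 3.3436e-27    # approximate mass of a deuterium nucleus

def classical_speed(kinetic_energy_ev, mass_kg):
    """Speed from the classical kinetic-energy formula E = (1/2) m v^2."""
    energy_j = kinetic_energy_ev * EV_TO_JOULES
    return math.sqrt(2 * energy_j / mass_kg)

v = classical_speed(1e6, DEUTERON_MASS_KG)   # a 1 MeV neutral beam atom
print(f"A 1 MeV deuterium atom moves at about {v:.2e} m/s")
print(f"...roughly {100 * v / 299_792_458:.1f}% of the speed of light")
```

That works out to around ten million metres per second — a few percent of lightspeed — which is what it takes to punch through to the core of such a large plasma before the atom gets ionized and stopped.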
Even with this level of heating, it’s still not entirely clear whether this will be enough to drive the plasma into the H-mode, where the damaging problems of turbulence are suppressed. Try as they might, no one has been able to come up with a closed theory of why H-mode plasmas exist, even though the results have been reproduced under a range of conditions at tokamaks all around the world. We simply don’t have the equations. Huge amounts of theoretical effort and experimental research at existing tokamaks have gone into trying to reduce this uncertainty, to ensure that ITER’s design would be sufficient to reach H-mode and to understand why it happens, but it still remains possible that ITER may not get there.
Similarly, the divertor in the ITER tokamak has to do things that no one has ever done before. A quick reminder of what the divertor is: “Situated at the bottom of the vacuum vessel, the divertor extracts heat and hot particles produced by the fusion reaction, minimizes plasma contamination, and protects the surrounding walls from thermal and neutron loads.”
Since the divertor is essentially having to handle the power that’s generated by ITER, alongside being bombarded by a huge flux of radioactivity and neutrons, the system needs to be incredibly tough. In fact, just looking at the heat alone — around 20MW of power is going to be deposited on every square metre of the divertor. That’s 10–20x higher than the amount of heating for space capsules re-entering the Earth’s atmosphere, and we’ve all seen things that burn up on re-entry. The divertor will essentially be smashed with the most sustained heat and the most sustained neutron radiation of pretty much anything that’s ever existed on Earth. So creating something that can do this is a huge materials challenge in itself. The plate is made of tungsten, with the highest melting point of any metal. It’s got a sophisticated system that pumps large amounts of coolant into the divertor. It’s positioned at an angle, and slight tweaks to the magnetic field are used, in order to spread the power out over a larger area. But there’s no getting around the fact — the divertor in ITER is going to have to withstand some unprecedented punishment without melting, and it’s a key area of research for the machine to actually perform as planned.
Another area that needs to be incredibly robust is the first wall of the tokamak. Remember, this is the interior of the vessel that is supposed to protect the delicate superconducting magnets, and the rest of the apparatus, and people in general, from the extraordinary heat and neutron flux — the neutrons that turn pretty much anything they hit into brittle, radioactive waste. And those central superconducting magnets need to be *cold*. If they’re not kept below -267°C — just a few degrees above absolute zero — then they stop being superconducting, and if that happens, then you suddenly have resistance in a magnetic coil that’s carrying a truly astonishing amount of energy. According to Khatchadourian, one scientist compared the impact of this to multiple jet aeroplanes crashing into the machine. If the first wall isn’t up to the job, or there’s some other catastrophic failure, you could severely damage the reactor and knock it out of commission for months.
We’ve already discussed how difficult it is to test materials against this kind of punishment because producing neutron radiation of this kind and with this energy is extremely difficult without actually having a full-scale fusion reactor. Under normal operation, the first wall should at least be less irradiated than the divertor, as — after all — power is supposed to be diverted through there.
One problem is — when the plasmas are operated in the stable “H-mode”, we see Edge Localized Modes where little bursts of particles and plasma smash out of the edges of the plasma’s toroid and slam into parts of the first wall. These will need to be suppressed if possible. One of the ways that ITER hopes to do this is to spray them with frozen pellets of argon, neon etc. which will effectively radiate away the energy from the ELM and cool it down before it can burst out in this fashion — but again, this is a new technology that’s still being tested.
But disruptions — those sudden losses of stability that cause the plasma to burst out in all directions — could really spell disaster for ITER. The plasma is easily large enough to produce enough energy to melt part of the first wall if there are disruptions. So if disruptions become a regular feature of ITER operation, then it could be in serious trouble. If it only takes 3–4 disruptions before you need to repair the tokamak, then you can easily see that the frequency of these disruptions is going to be crucial in how practical any kind of fusion power plant is. After all, disruptions occur several times a day in machines like JET when they are running at full experimental capacity. ITER’s website — where things are usually quite glossy — suggests that ITER is built to withstand disruptions in up to 10% of its plasma pulses.
If you need to replace the first wall — or worse, some other valuable component — every few days when you’re running the machine, it will struggle to produce power and it may never be economically viable (even if you think that it can be today…)
Again, lots of research is going into ways of predicting and preventing disruptions — if one looks like it’s on the way, ITER will probably douse it with lots of these frozen atomic pellets, which will then act like impurities, radiate away the energy, and prevent the explosive disruption from occurring. But the ability to predict and prevent these miniature explosions of plasma, especially when the mechanics of them are not entirely understood, is going to be both crucial and tricky. This is part of why ITER’s timeline is that it will first produce plasma in 2025, but might take years after this to operate at peak performance — this was also true for JET, which opened in 1983 but didn’t set its records until 1997. Initially, for fear of damaging the machine, ITER will have to be operated conservatively until disruptions are under control. I imagine they will probably gradually jack up its performance until the disruption rate becomes the limiting factor. And, ultimately, this is a new realm for plasma properties: that’s the whole point of it. If it turns out that disruptions are more likely in ITER than they are at JET, or more damaging, the whole project will become about fixing that problem.
Choosing what to make the first wall out of is also a difficult balancing act. Inevitably, parts of the first wall will erode and end up contaminating the plasma. We’ve talked about how atoms with lots of electrons act like efficient radiators — impurities that just radiate away an awful lot of energy because their electrons can be excited, ionized, recombine, de-excited, releasing photons of radiation along the way. So while you might hope to make the first wall out of tungsten again, it’s not actually ideal, because it has so many electrons — 74 per atom, as a heavy metal — that if the first wall contaminated the plasma, all your heating would be lost. So beryllium is chosen as a compromise.
This adds its own problems, of course. Beryllium is element number 4 — just after hydrogen, helium and lithium. The reason you don’t think about it much is that it’s extremely rare: it exists at perhaps 2–6 parts per million in Earth’s crust. Rather than being formed directly by fusion in stars, like the more abundant elements, lots of beryllium isotopes are unstable; the beryllium that does exist on Earth is mostly formed when other elements are bashed by cosmic rays or other types of radiation. It’s probably a good thing that it’s so rare, because it happens to be toxic to humans — beryllium dust in your lungs will screw them up badly, and thousands of people who worked with beryllium in the early days, in the 1950s and 1960s, were left with permanent lung damage or even killed.
Evidently, like much of the maintenance for the tokamak — which is so highly radioactive that you don’t want humans going anywhere near it anyway — the handling of beryllium will be done by robots and remote handling systems, as it’s done at JET at the moment. Beryllium, with only four electrons, will radiate away far less of ITER’s heating, and its melting point of around 1,287°C is still pretty high. Beryllium has another, strange advantage, too — it produces more neutrons when it’s bombarded with neutrons. And this is important for the next major thing that ITER is supposed to be testing.
As we’ve discussed, ITER and JET set records by fusing deuterium and tritium together. But tritium is very rare, and currently retails for around $30,000 per gram — with a half-life of 12 years, it’s a difficult fuel to find, store, and produce. So the aim is to create “tritium breeding” reactors: a blanket of lithium, just beyond the first wall, is going to be bombarded with neutrons. This will, in turn, create tritium, which they can recover from the inside of ITER alongside whatever tritium wasn’t burned in the initial reactions — and hopefully, then, your power plant is a little bit more self-sustaining, creating some of its own fuel.
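For reference, the relevant breeding reactions are textbook nuclear physics — these equations and their energy values are my addition, not from the episode. Lithium-6 capture is the profitable one; lithium-7 costs energy but returns the neutron; and beryllium’s neutron multiplication (mentioned above) helps make the neutron books balance:

```latex
\begin{align*}
n + {}^{6}\mathrm{Li} &\rightarrow {}^{4}\mathrm{He} + \mathrm{T} + 4.8\,\mathrm{MeV}\\
n + {}^{7}\mathrm{Li} &\rightarrow {}^{4}\mathrm{He} + \mathrm{T} + n - 2.5\,\mathrm{MeV}\\
n + {}^{9}\mathrm{Be} &\rightarrow 2\,{}^{4}\mathrm{He} + 2n \quad\text{(neutron multiplication)}
\end{align*}
```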
The central solenoid — that central coil of superconducting niobium-tin wire that passes through the torus and helps generate the current in the plasma — is an incredible thing all by itself. There are six modules of wire stacked on top of each other which can run opposing magnetic fields, allowing the magnetic field within the tokamak to be shaped in different ways — but this also means that, when opposing fields are run through the coil of wire, there’s a tremendous force that pushes the coils apart that you have to deal with. This force could be up to 60 meganewtons. By comparison, that’s twice the force that a space shuttle requires to take off. Niobium-tin is a superconducting wire — when cooled, it has zero resistance and can create astonishingly high magnetic fields as you need for ITER to work. But this also makes it delicate and fragile. It cannot withstand much punishment from the neutrons produced by the fusion reactions. Once drawn into wire, it must be baked in a furnace to become superconducting. And, with each pulse, the material fractures slightly. There’s a tradeoff. The central coil will last longer if it’s fired with less energy — but a certain energy is required for the plasma current to reach its goal. It’s designed to run for 60,000 pulses over the lifetime of ITER. According to ITER’s website, each one of these wire modules takes around two years to manufacture — and they’re currently building just one spare alongside the six original modules. Obviously, if something happened that rendered them inoperable, it could take years to get ITER running again.
ITER is an experimental reactor — that means it has a hell of a lot of diagnostics. The central “brain” of ITER, a computer called CODAC, will be processing data from 120,000 different sources to try to monitor and diagnose the plasma performance as it continues — to understand the physics of the fusion reactor, and adjust its design in real time to feedback against changes to the plasma if it can.
So what is ITER supposed to accomplish exactly? We’ll talk about its design later on, but the key goals are as follows:
They want to attain a power multiplication factor, Q, of 10 or more for sustained bursts of five minutes. That is to say, for the 50MW that goes into heating the plasma, ITER aims to generate 500MW of power from thermonuclear fusion. As we’ve discussed, of course, this power won’t be harnessed, and it’s also not really a net energy gain when you take into account the energy losses for the building, in the heating process, and in running the magnets. ITER won’t produce net energy, but it is designed to surpass that scientific breakeven, where JET only managed a Q of around 0.7. This is 30x more power than any tokamak has generated, and the confinement time for the plasma is 100x as long — still a way off what a power plant would need to do to be feasible, but orders of magnitude better than previous tokamaks.
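The arithmetic behind those Q values is simple enough to sketch. The JET figures below are approximate numbers for its 1997 record shot (my addition — about 16MW of fusion power for about 24MW of heating), not taken from the episode:

```python
def plasma_q(fusion_power_mw, heating_power_mw):
    """Scientific power gain: fusion power out over external heating power in.

    This deliberately ignores the electricity used by the magnets,
    cryogenics and the rest of the plant — which is why Q = 10 is a
    scientific milestone, not a net energy gain.
    """
    return fusion_power_mw / heating_power_mw

q_iter_target = plasma_q(500, 50)   # ITER's headline goal, figures from the episode
q_jet_record = plasma_q(16, 24)     # approximate 1997 JET record shot

print(f"ITER target: Q = {q_iter_target:.0f}")
print(f"JET record:  Q = {q_jet_record:.1f}")
```

That ratio of roughly 0.7 for JET is the figure quoted throughout these episodes; ITER’s target of 10 is the factor-of-ten step change.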
Its secondary aim is to achieve a Q of greater than 5 when running in steady state. To get Q up to 10, the plasma current is generated in “pulses” using electromagnetic induction — but this can’t be used in the steady state. Using neutral beam injection to drive the current — as well as the so-called “bootstrap plasma current” that forms when the plasma current is self-reinforcing — you can sustain the plasma for a much longer time. This is where ITER is aiming to simulate what a real power plant might be like — operating continuously for as long as possible — and the designers are aiming to demonstrate that it can achieve this in the steady state. Of course, it’s in these steady state runs that the divertor and the first wall are going to be most challenged.
ITER is also hoping to achieve a “burning” plasma — in a Q > 10 plasma, you imagine that a good deal of the energy to heat the plasma will come from the thermonuclear reactions themselves. This might not be enough to “ignite” the plasma — in other words, if you took away the external heating altogether, the fusion would probably still cease — but it does demonstrate that the plasma might be capable of a self-sustaining, controlled reaction. Ignition would be nice, and maybe if the plasma happens to work better than expected it might just be possible with some specific iteration — but it’s not a primary goal of the ITER device, which is first and foremost hoping to break even and produce a power gain. A burning plasma is likely to be different to the plasmas that currently exist. For a start — if you get lots of deuterium-tritium fusion going, that’s good, but it also means that for the first time your plasma will have significant amounts of fused nuclei. The product of that fusion — helium-4 nuclei, or alpha particles — will change the behaviour of the plasma again. In fact, as they hoped would happen for NIF, these helium-4 nuclei should end up being the dominant source of energy to heat the plasma in future — so a great deal of the plasma physics effort at ITER will be trying to understand how this new, burning plasma actually operates.
ITER is also hoping to demonstrate that all of the various technologies — the superconducting magnets, the vacuum system, the heating, the divertor, etc. — can work together. And it’s also there to test out the various tritium-breeding schemes that have been proposed to recover tritium from the device, so that the reactor is self-sufficient in terms of fuel — and to test out the safety characteristics of a fusion reactor.
So what you can probably see already is that there are a number of ways that ITER could run into real trouble. What if some new plasma instability is discovered? What if disruptions are more common than we think, or if they prove to be more damaging to ITER’s internal components, or if they are harder to stop than we hope? What if the heating system fails to deliver that 50MW of power to the central plasma? What if the divertor or the first wall can’t stand up to the conditions inside the reactor — or, at any rate, they need to be replaced so often that getting practical power out of the device looks impractical without major advances? And what if tritium breeding is unsuccessful, and the attempts to retrieve extra fuel from the reactor don’t succeed? I don’t think any of these questions has yet been answered — which is why it’s going to be fascinating to see how each aspect of the tokamak performs, and why dozens of smart and dedicated people in countries around the world are working on solving each problem.
Next episode, we’ll talk a little more about ITER’s history so far, the politics surrounding the project, and its prospects for the future.
Nuclear Fusion: Is ITER “The Way”?
Let’s talk about ITER’s history. The idea behind this project really began in 1973, when Soviet leader Leonid Brezhnev met Nixon at the height of the OPEC oil crisis. They agreed that fusion energy collaboration might ease Cold War tensions, and with the EU and Japan, the INTOR (International Tokamak Reactor) project began — really just a set of meetings to discuss what might be needed for another project. In 1985, when Reagan and Gorbachev met, it was decided to actually advance this towards an engineering and design phase. In 1986, the US, the EU, the Soviet Union, and Japan signed the ITER agreement. The initial design of ITER was more ambitious than the version that’s being constructed today — it aimed to produce 1.5GW of power, closer to what the second-generation DEMO is supposed to produce, and to ignite the plasma, as well as being the first machine to produce more energy than is supplied in heating. By 1988, conceptual designs for the plant were underway. In 1993, people were expecting to have the device ready and built by 2010. Come now, my dear, dear listener. You and I are too old to dance. You know how that went.
An optional excerpt from Raffi Khatchadourian’s 2014 New Yorker profile of ITER:
“In those early years, iter had — for the only time in its history — a single visionary at its helm: a French physicist named Paul-Henri Rebut. Balding, with intense eyes darting behind large glasses, Rebut had designed jet, a widely praised machine with a vacuum chamber big enough to walk through. Some colleagues referred to him as a genius; he could attend to engineering obstacles with extreme focus, and was able to visualize simple solutions for intricate problems. At jet, Rebut wandered the halls of the design office at night — he thought more clearly while pacing — and sometimes he went from workstation to workstation, penning corrections or x-ing out whole ideas. “He could be brutal,” Chiocchio recalled. “But he was very, very clever.”
Once Rebut had agreed to take charge of iter, he moved with characteristic boldness. For years, in various workshops, a conceptual design had been sketched out for a dual-purpose machine that was partly an experiment to prove fusion’s feasibility and partly a prototype for a commercial reactor. Rebut tossed out the design and replaced it with his own: a gargantuan device, in effect a full prototype. In his mind, fusion was already feasible — and, as he had once explained, “There is a general tendency not to be harsh enough in this field and to go too slowly, not to make the necessary step large enough.” He envisioned a vacuum vessel seventy-two feet in diameter. Its plasma would produce a gigawatt, or a billion watts, possibly more, and run for a thousand seconds. He saw no point in the massive global effort without chasing the ultimate goal: ignition.
At that time, iter had no formal organization. “All of us were basically assigned to this international team from our own countries,” Chiocchio recalled. Three offices were opened: one in Garching, Germany, where components inside the vacuum chamber were being worked on; another in Naka, Japan, which concentrated mostly on magnets; and a design center in San Diego, where Rebut was based. Chiocchio worked in Germany, but he sometimes flew to see Rebut. “I remember he had a chair with wheels, and was rolling among the workstations of the designers,” he recalled. “Rebut himself was the integrator. We were sending them faxes every evening, and they were sending us responses by fax every morning. We were joking, this is design by ‘strategic fax.’ But the approach was not entirely entropic. It had an advantage. Instead of working eight hours a day, we were working sixteen.”
The design was extremely elastic: features shifted continually in relation to other features that were also shifting. “The team was not so big, so we knew each other well,” Chiocchio said. Working at the conceptual level — without worrying over fine details — they could grasp what colleagues in other divisions were doing. The plasma was constantly exerting new and unforeseen forces, which the iter engineers struggled to measure and to incorporate into their designs. “The mentality of fission is that there is a systematic process — you define your loads, your criteria, and then you produce a design,” Chiocchio told me. “At the beginning, at iter, sometimes I would ask my boss, ‘Can you tell me what the main requirements are for this component?’ And he would say, ‘What are you talking about? Try to find a solution.’ It was a bit more of a, let’s say, creative engineering environment.”
Rebut himself did not bother documenting the requirements. This was information that he kept easily in his head. An American representative urged him to work in a more standardized way, but he refused. The design was growing in scale and cost, and Rebut’s intuitive style and unwillingness to engage in basic diplomacy began to work against him. In 1994, the United States succeeded in having him removed. As it was not Rebut’s way to leave subtly, he went to Congress, and argued that the iter organization had insufficient legal authority, insufficient independent funding, and, perhaps worst of all, a leadership of incompetent bureaucrats. By focussing on consensus, he argued, the parties made decisions based on the lowest common denominator. The representatives assigned to the iter Council were “more concerned with the work awarded to each home team than by the success of the engineering design activity.” If things did not change, Rebut predicted, the machine would never succeed.”
But things quickly got a little bit out of hand. Essentially, while everyone loved the idea of an international fusion reactor collaboration, it posed its own problems. For a start, where do you put it? Wherever you put it, that’s where the majority of jobs will be created, and where the majority of the economic benefit will be felt. Scientific talent and funding would, ultimately, flow from some of the ITER countries towards whichever country eventually managed to host the thing. Meanwhile, scientists in individual countries were aware that a big part of their government’s fusion budget would be sucked in towards ITER. If you’re working on stellarators, or different types of smaller tokamak, or inertial fusion, or anything else, you clearly want to oppose “putting all your eggs in one basket”, and if you disagree with the committee-like nature of decision-making in ITER, tough luck. Throughout the 1990s, against a background of economic recession, budgets for science programmes all over the world declined. In the US, for example, the magnetic confinement fusion budget was slashed from $350m a year in 1995 to $240m in 1996. The magnetic fusion scientists in the US were aghast — they felt that this ultimately jeopardized the US’s ability to contribute much to ITER, as well as preventing any new domestic machines from being built and even preventing them from fully exploiting the main US tokamak, TFTR, in the years to come.
There was fraying among the other parties, too. In 1997, Japan — undergoing a nasty economic crisis — asked for a three-year delay in the planned construction of ITER.
By 1998, the conceptual and engineering design for ITER was finished. Yes, I know what you’re thinking — something that was originally designed 20 years ago is still being built; that’s all part of how difficult this kind of project is to get off the ground. At this point, it became clear that ITER would cost $11bn to build, and that this might be an optimistic estimate.
And the US Congress balked. The House Appropriations Committee was angry that they’d contributed ten years and $350m to a project which had so far failed even to select a site, and cut ITER’s funding altogether. By July, the US refused to sign the extension to the ITER agreement, and by October, they pulled their scientists out entirely.
The departure of the US, along with problems from the other participants, meant that the ITER project had to be scaled down. An ITER-Lite design — the current one, aiming to produce 500MW of power rather than 1.5GW, smaller in general and not necessarily aiming for ignition and sustained burn of the plasma — was settled on. In the meantime, confinement fusion scientists at JET and JT-60 in Japan were trying to set records to demonstrate how close we might be to breakeven — this is when those record-setting runs for JET’s power production, for example, occurred.
For a long while, it seemed as if the departure of the US might have — if not killed, then severely delayed any hope for the ITER project and its ambition.
And, throughout the ITER collaboration, the US has certainly been something of an unreliable partner. In 2017, for example, they contributed just $50m to the project — in other words, the United States contributed as much to ITER that year as was spent making The Emoji Movie. There are ongoing battles over funding for the project. In 2018, they planned to halve the contribution again, but ended up settling on $122m in last-minute budget contributions, a step that apparently prevented further delays to ITER. I should, of course, point out that it’s not just the United States that has been reluctant to contribute to the spiralling budget — or where politics could pose an issue in the future. As I write this, it’s still totally unclear what will happen with Brexit — and, given that this will probably come out within a year, that will probably still be true when you hear it — but, if Britain were to leave without a deal, we would no longer be able to contribute to the ITER project (despite hosting the world’s best tokamak in Oxfordshire.)
As an aside — I think you can all guess how I feel about Brexit — but one of the things that’s irritated me the most about it is the fact that, alongside leaving the EU, Parliament voted to ensure that we leave “Euratom” — a treaty that does little more than ensure the safe transportation of radioactive materials throughout Europe, help us keep the facility at JET going, and allow us to contribute to ITER. I honestly doubt 90% of the people who voted in the referendum, on either side, had any idea what Euratom was — or would have had any objection to a sensible agreement that just helps us keep medical treatments like chemotherapy, nuclear power plants, and nuclear research going, if it had been explained to them. But Brexiteer ultras voted to force us to leave this treaty too, even though it damages cutting-edge science, energy security, nuclear non-proliferation, and healthcare in the country, for apparently no reason other than that it has “Euro” in the name. There was even a debate on this specific issue, and they ensured that leaving was written into the law. It’s just another example of how the sciences, which rely on international collaboration, are being horribly affected by short-sightedness over Brexit — the morning after the vote, no one who worked at JET had any idea whether their project would continue, although at present it seems that sanity has prevailed and it will.
Anyway — all of this politics aside, on which you may well disagree, the point I’m making is that concerns and tensions in ITER’s international collaboration are far from finished today. But they’re not quite as bad as in the late 1990s, when it seemed as if the project might have to be shelved altogether.
However, with the adoption of ITER-lite, and some not-inconsiderable tweaking and lobbying, the ITER organisation managed to persuade new members to join. The redesigned ITER-lite was finished by 2001. By reducing the budget to a supposed $6bn, they got South Korea, China, Canada, and India to join the project — and, in 2003, George W Bush announced that the US would be rejoining the ITER collaboration. A lot of this was driven by the fact that, squeal as the US magnetic fusion scientists might, ITER was increasingly becoming the only game in town for them: the big tokamak, TFTR, was shut down in 1997, and they ended up working on a series of smaller devices with ever-dwindling budgets. Ultimately, the powers that be in US magnetic fusion preferred to be part of the ITER project rather than dying a slow death at home.
All of this was very well. But the fact remained that, twenty years after Reagan and Gorbachev agreed on the project, they still hadn’t decided where to build the damn thing. And there was an increasingly dramatic deadlock over the site, with the candidates ultimately narrowed down to a death-match between two: one in Japan, and one in the south of France. Spain and Canada had also proposed sites — when Canada’s was rejected, it left the collaboration, and Spain was bought off by being allowed to host some administrative buildings for ITER. Jason Parisi and Justin Ball, in their wonderful “The Future of Fusion Energy”, wryly summarise that little conflict: “In 2005, the negotiations concluded without even requiring a major international emergency to conclude them.” In reality, the project was on the verge of falling apart, with increasing rumblings from all sides that they should get on with it or funding would vanish.
Ultimately, France did win out, but as part of a compromise deal brokered by the ITER organisation. The EU gave Japan a supercomputer, 20% of the leadership positions (including ITER’s director-general post), and money to help upgrade their domestic tokamak programme, and agreed that many of the contracts for materials would go to Japanese companies. The EU also agreed to pay a greater fraction of the budget, and a $600m materials research centre would be set up in Japan to create all of those complicated materials required for ITER to succeed. It was a compromise that allowed the project to keep going.
By 2006, the seven participants formally agreed to fund the creation of the reactor: the EU, as hosts, would contribute 45% with the other participants funding around 9% each. The initial plan, in 2006, was to have the reactor operating by 2018 at a cost of around $7bn. The final agreements were signed in 2007, and preparation of the site began in 2008.
Of course, this was just the beginning of the problems. By 2014, it was increasingly clear that ITER did not have a prayer of being delivered on time and beginning experiments in 2018.
The mood inside the project was very bleak at this point. Raffi Khatchadourian wrote a great profile of the ITER Organization in the New Yorker, around the time that this crisis point really bit:
“Morale is through the floor, and one can expect cynicism, disagreements, black humor. “There is anxiety here that it is all going to implode,” one physicist told me. Many engineers and physicists at iter believe that the delays are self-inflicted, having little to do with engineering or physics and everything to do with the way that iter is organized and managed. Key members of the technical staff have left; others have taken “stress leave” to recuperate. Not long ago, the director-general, Osamu Motojima, a Japanese physicist, who has run the organization since 2010, ordered workmen to install at the headquarters’ entrance a granite slab proclaiming iter’s presence. People call it a tombstone.”
What was going wrong at ITER? There were, of course, a number of issues. The cost of some key materials rose. You probably noticed that construction started in 2008 — the year of the global financial crisis, which plunged everything into uncertainty and made cash harder to come by. The initial budget was wildly optimistic — it was never going to cost $6bn to build. This estimate came from before the design was even finished, and didn’t include the realistic costs of actually manufacturing things in a world where plans go wrong and the best-laid schemes tend to fall apart. And managing this huge, international collaboration led to its own problems. Everyone competed over who would get to manufacture the most valuable and lucrative components for ITER. On at least one occasion, this led to outright farce. Take the central vacuum vessel that houses the tokamak. It’s built up out of 9 sections. It would make perfect sense for one nation to produce all 9 sections, right? So they’re identical? But no. For the sake of compromise, the EU built 7 of them, and South Korea built 2. They needed to be identical, but South Korea’s sections were designed to be welded together, while the EU’s used bolts. It’s pretty mind-boggling that such nonsense can occur for political reasons, but I imagine anyone listening who’s worked on this kind of collaborative project might be thinking of their own pet disaster.
Finally, in the spirit of collaboration, there’s not a great deal of central organisation. Each country has its own domestic agency responsible for its contribution — and sometimes, as was the case for the US, a country simply fails to fund it. Rather than a single central group with representatives from all these nations, then, there are seven little ITER groups that don’t always communicate with each other properly — or even have the same amount of funding, or the same motivations. Ouch.
Conflicts between these Domestic Agencies are a big part of why ITER was delayed so severely. For example, what if one group wants to tweak a particular part of the design in a way that makes your part more expensive or difficult to manufacture? Gridlock. Even getting people to coordinate across a single language, or use the same scientific terminology, is a struggle.
When European engineers who had invested decades of research on tokamak inner walls proposed building iter’s, a Chinese official stood and, deeply upset, argued vehemently that it was the height of arrogance to presume that China could not manufacture a wall. And so it was decided: China would make part of the wall.
Other problems arose along the way, associated with the manufacture of individual components. Remember that central solenoid, the one that continues to crack slightly with every pulse? When it was first designed by Japanese manufacturers, it could only last for 6,000 pulses — 1/10th of what it was supposed to. Eventually, the cable was modified and refined, and it was successful, but not before there was considerable concern that this might prove impossible to build, and with a two-year delay in this component’s manufacture. Meanwhile, the concrete for the floor had to be precisely-levelled to within centimeters.
In 2014, a scathing internal report about ITER was leaked to the press. It criticised the lack of project management inside ITER. In one particularly damning passage, it said: “We were unable to observe a sense of urgency, a passion for success, a commitment to rapidly finding solutions for problems, or an agile or nimble project organisation.” The report criticised everything, from the initial over-ambitious schedule, to the decision-making processes, to the lack of good management, through to the lack of communication within the organisation. There was no shortage of smart and dedicated people working at ITER — but, as I’m sure you will all appreciate, that does not mean that you cannot still have a totally dysfunctional organisation.
Loyal listeners will think back to our Buzzkill episodes, where I talked about the economics of fusion — and how even the big fission plants are prone to costing billions of dollars and overrunning their schedules by years and years. The truth is that these huge mega-projects are almost always subject to the same problems. A big part of it is the sunk cost fallacy, which is also wonderfully detailed in the Khatchadourian article: according to one former ITER engineer, the deputy director (who denies making this comment) supposedly said, “If you spend as much money as you can, after the first billion no one is going to stop us,” and so he spent and spent and spent.
This is not unique to fusion, of course — here in the UK, our high-speed railways are another example of a project horribly delayed and horribly over budget. ITER, for all that it represents a remarkable achievement and a wonderful vision, is a textbook example of the problems that plague huge, one-off facilities (problems that are far less severe for smaller, modular renewables, or for technologies that have been built many times before.)
Pressure over the budget and timescale of ITER has led to compromises over its design. One example is in handling neutrons. As the team tries to figure out how to assemble all of these components, every little change counts — every little change can determine how many neutrons are going to smash into particular parts of the apparatus. If the neutron load on sensitive parts is too high, then they’ll simply have to turn ITER’s temperature down until it can operate safely. Even now, it’s difficult to predict what the final performance of ITER might end up being — and whether it can sustain that performance without damage to the reactor itself. If ITER could achieve 10x power gain, but not without destroying its interior, that would be a horrible irony, and would suggest much more work for future reactors. Similarly, ITER’s original design called for two divertors — now, they’re just going with one.
I really don’t care to imagine for too long what it must’ve been like to work in this organisation when it was undergoing all of this criticism and media scrutiny. Especially because, as I’m sure you’ll agree — fusion research attracts a certain kind of person. You don’t *have* to be like this — but chances are you will be idealistic, seduced by the dream of contributing to this vast, cathedral-like undertaking that solves the world’s energy problems and propels humanity into a new golden age. Also: you have dedicated your entire career to this pursuit. If you start working on ITER in 1990, or 2000, expecting it to start experimenting in 2018, the idea that the project might get delayed by a decade (or cancelled altogether) is immensely concerning to you as an individual. You will have spent decades working on it. The mismatch between the timeline and the delays, the expectation and the reality, and the huge amount riding on such a vast scientific project… It must have been extremely difficult and stressful for everyone involved.
It was clear in 2014 that something had to give, and, indeed, something did. The management team was removed and replaced, with a new director-general — Bernard Bigot — taking over in 2015 with the goal of turning the project around. ITER admitted that it would not meet its original goal of fusion experiments by 2018, and instead adopted its current timetable: first plasma in 2025, first deuterium-tritium experiments in 2035. The tokamak itself is currently under construction. With the new, delayed timetable and new management, there’s generally more optimism about the potential for the project to succeed, although it will almost certainly wind up costing more than expected. It is currently estimated to cost around 17 billion euros, or $20bn, to construct — and then it will cost $302m a year to run over its planned 20 years of operation, followed by a decommissioning phase that might set you back another billion euros. So we’re realistically talking about a total cost for the project of at least $30bn, and ultimately no one knows how much it will end up costing — a far cry from the $6bn that was originally claimed. Come ask me in 2050 what it actually cost. At present, the construction phase is estimated to be around 60% complete.
And the work is ongoing. As I write this, I can tell you that the most recent development is that the big cryogenic lower chamber for the magnets has just been lowered into the tokamak pit — the single largest component of ITER, on the move.
I cannot honestly tell you whether ITER will succeed in some or all of its primary science goals or, like NIF, will be remembered for falling short of its promises. As I’m sure you will appreciate — with every new machine component that’s introduced — there is potential for things to go wrong, or for delays to occur. I would be surprised if a project of this complexity works the first time that it’s switched on, and I would be pleasantly surprised if there weren’t any further delays to the big roadmap that has been sketched out at the moment. It would represent an incredible triumph if, 40 years after it was first proposed by Reagan and Gorbachev, ITER lights up with plasma and — 50 years after it was first proposed — it successfully achieves its designed performance in deuterium-tritium fusion. It would be a scientific cathedral — the culmination of decades of work on behalf of thousands of people. Will it work? I don’t know. Will it provide a commercial path to fusion energy by 2050? Of that, I’m even less sure. Am I excited to find out what will happen? Of course! Am I terrified in equal measure that something might go wrong? Of this, there is no doubt. I guess we’ll all have to find out together. If this show is still running by then, I’ll be sure to keep you posted.
Thank you for listening to this episode of Physical Attraction. We’ve taken you from the very dawn of fusion energy right up until the present day — well, nearly. Alongside ITER and the mainstream narrative of fusion, there are a vast array of fascinating startups who are aiming to achieve the same goal by very different — and perhaps more commercially viable means. Over the next few episodes, we’ll discuss Fusion’s Dark Horses.
Then, it will be time for us to conclude this truly epic journey that we’ve been on together. We will take a look over what we’ve learned, and talk about the possible future of fusion energy — and, overall, just try to take it all in and figure out precisely what the hell just happened and where we’re going next. Until then — see you around!
Nuclear Fusion: Can A Startup Build A Star?
-> In talking about smaller tokamaks, good to introduce the fusion physics from Chapter 8 of Parisi and Ball — Troyon limit for disruptions, kink limit for plasma current, technological limit for B-field etc., empirical “Greenwald” limit for plasma density, also practical limits based on how much punishment the materials in the tokamak can withstand and so on. This will then explain why your options are to make superconducting magnets with a higher B-field work, *or* make a much bigger tokamak (and both routes currently explored by various people.) Various different trade-offs that arise in fusion engineering and design.
some stellarators and some other approaches via C9.
Arguably, the story of humanity’s attempts to build a sun on Earth started in earnest with an ambitious dream, and promises that weren’t kept. The cynics would tell you that this is where the story ends.
Great strides have been made in the two main approaches to fusion — magnetic confinement fusion, and inertial confinement fusion with lasers. But problems have remained.
As generation after generation of fusion device merely revealed more subtle and complex ways for plasma to be disobedient, initial optimism died away. As scientists searched for ways to smooth out each successive instability, optimistic predictions about its timeline became clichés, and then bitter jokes. “Fusion is the energy source of the future — and always will be.”
Devices got larger and more complicated. Spitzer’s first stellarator was built in a disused chicken coop and fit on a tabletop. The facility that houses the Wendelstein 7-X, Germany’s state-of-the-art stellarator, cost €1bn. The experiment itself took 18 years to construct — and was a decade overdue.
The story in inertial confinement fusion was similar: the largest laser fusion experiment ever built, the National Ignition Facility (NIF), saw its budget quadruple from initial estimates to over $3.5 billion. When it became clear that it would not achieve “ignition” — or scientific breakeven — the facility shifted its focus to weapons research.
Today, for many people, humanity’s efforts towards nuclear fusion are synonymous with the construction of the ITER tokamak — by far the largest fusion experiment ever carried out. An international collaboration between the EU, Russia, Japan, China, India, South Korea, and the US, the ITER tokamak is likely to cost well over $20bn to construct. While the aim is to achieve Q = 10 — a 500MW power output for 50MW of energy input for the plasma, equivalent to many conventional power stations — a further device, DEMO, will need to be constructed to function as a practical power plant.
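As a quick sanity check on those headline numbers: the gain factor Q is just the ratio of fusion power out to heating power in. A minimal sketch (the helper function here is my own, purely illustrative):

```python
# The fusion gain factor Q, as described in the text:
# Q = fusion power produced by the plasma / external heating power supplied.
# Note this is thermal power in the plasma, not net electricity: the plant as
# a whole (magnets, cryogenics, etc.) still consumes far more than it makes,
# which is why a separate device, DEMO, is needed to act as a power plant.

def fusion_gain(p_fusion_mw: float, p_heating_mw: float) -> float:
    """Return the plasma gain Q, given fusion output and heating input in MW."""
    return p_fusion_mw / p_heating_mw

# ITER's headline target: 500 MW of fusion power for 50 MW of plasma heating.
print(fusion_gain(500.0, 50.0))  # -> 10.0
```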
On paper, given how hard fusion has proved to achieve — not for want of trying — it may seem as if only a huge, international collaboration on the scale of ITER has a prayer of success. Just as it took the Large Hadron Collider to find the Higgs, there is a sense that science is moving into an era where it takes more than a few gifted experimentalists with “sealing wax and string” to change the world. Only science and engineering on an industrial scale, with virtually bottomless budgets and thousands of collaborators, can hope to succeed on a project this fiendishly complex.
But with that international collaboration comes significant disadvantages. We’ve described how ITER is hardly the most dynamic and swift-moving of organisations — in fact, it’s decades behind schedule.
For all of these reasons, not everyone is on board with the ITER timetable.
In the shadow of ITER, there are countless smaller fusion projects. Some are start-ups focused on a particular, novel means of fusion: others are spun out from research efforts at big technology companies, or plasma physics laboratories in universities. They are often funded by backers who can afford to risk millions chasing the prize of fusion. Jeff Bezos, Elon Musk, Bill Gates, Paul Allen and Peter Thiel are just some of those who have backed private fusion projects. There’s even a venture capital fund, Strong Atomics, which is solely devoted to investing in fusion start-ups.
If fusion lives up to the hype and becomes an integral part of the energy mix, it’s clear why ambitious startups would want to be the first to achieve net power output and reap the rewards. But there are deeper concerns about putting all humanity’s fusion eggs in the ITER basket.
A classic parable is that of the primitive civilization that decides to go to the moon. Their top scientists build successively bigger balloons, and note that each one can reach a greater height than before. Naturally, they spend decades constructing colossal balloons, convinced that the next one will finally achieve their goal.
It’s true that tokamaks have the most impressive record: they have been studied for decades, and we’re getting better at understanding how their plasmas behave. They hold records for confinement time and energy production. The improved stability of tokamaks led Western scientists in the 1960s to abandon the stellarators and pinch devices they had worked on, as soon as they confirmed that the Soviet prototypes were performing as well as intended. But there will always be those who think these other ideas were abandoned too hastily.
And there is always a risk that, as we venture into new territories for plasma behaviour, some new instability might emerge that means ITER can’t achieve its targets, in the same way as NIF was unable to achieve breakeven for inertial confinement fusion. Who would be willing to spend millions of dollars on the next fusion reactor then?
“We can build these machines until the cows come home. I am wondering in my own mind, how long do you have to beat a dead horse over the head to know that he is dead?” This was what Senator John Pastore, on the Appropriations Committee, said about magnetic confinement fusion… back in 1964.
Then there is perhaps the more troubling question: what happens if ITER performs exactly as it’s supposed to? By 2027, scientists would have proved that, with a couple of decades and billions of dollars worth of funding, you can produce a relatively small amount of power in an extremely complicated way. The first actual attempt to harness that energy in a power plant, DEMO, can’t begin serious construction until the data from ITER has been analysed. Even an extreme optimist would conclude that, if ITER’s roadmap is the only route to fusion, it won’t power anything until 2050, perhaps much later. On this timescale, arguing that fusion will save us from dangerous climate change seems like a fantasy.
Indeed, those behind the start-ups would argue that — if ITER truly is “The Way” to nuclear fusion, as its Latin name suggests — then fusion is a commercial dead end. Private investors who are willing to risk billions of dollars on a power plant that uses experimental technology are few and far between. Investment is easier to come by for smaller projects that will realise a profit more quickly. This reluctance is a big part of the decline of nuclear fission power, from 18% of the world’s electricity share in 1996 to 11% today, as projects like Hinkley Point C and Wylfa face severe delays, budget overruns, or cancellation. All this for a technology that has successfully generated energy for decades!
Wind turbines and solar panels have substantial commercial advantages. The Kamuthi Solar Park in India has a nameplate capacity greater than ITER, at 650MW, and cost around $700m to build. The construction stage took only eight months, and finished in 2016. The electricity generated by solar panels is already cheaper than the most optimistic estimates for the cost of electricity from nuclear fusion: who knows how low it could fall by 2050? In other words, unless it’s extremely difficult and expensive to store the energy from renewable power sources, it’s hard to see how tokamak behemoths will compete against renewables and storage in the cruel world of the free market. Now, it’s true that many of the people advancing this argument resent ITER for sucking up billions of dollars in government funding that might otherwise go towards their projects, leaving them reliant on convincing private investors. But this doesn’t make concerns about ITER’s commercial viability any less valid.
Fusion start-ups are nothing new. In fact, if you bought a copy of Penthouse magazine in the 1970s, you were unwittingly funding one such project: the magazine’s owner, Bob Guccione, poured millions of dollars into an experimental fusion reactor after reading an interview with a disgruntled fusion scientist in one of his magazines who felt that the mainstream efforts were doomed to failure. But, as the race to find clean sources of energy to satisfy an ever-growing demand continues, and ITER continues at its glacial pace, dozens of companies with their own pet approaches have tried to beat them to the punch.
If ITER-style tokamaks do turn out to be a commercial dead end, then these fusion dark horses might be the only way the technology can be viable as a major source of energy. Various different projects use a fascinating mix of different technologies. Some are revivals of older ideas for nuclear fusion; others hope to leverage new materials or techniques to make fusion possible in smaller tokamaks.
Why do the designers of tokamaks insist on building ever-larger machines? It’s not a totally ridiculous approach. For a start, intuitively, a bigger reactor can produce more energy. If a plasma ever “burns” — i.e. fusion reactions supply the energy needed to keep the plasma hot — then its energy balance in that steady state is what matters. The plasma loses energy through its surface area, as particles leak out and photons of radiation escape, carrying energy away from the bulk. It’s heated by the fusion reactions, whose rate is proportional to the number of nuclei that can fuse — which is proportional to the volume of the plasma. Generally, then, a bigger device — and a larger tube of plasma — means more reactions with a comparatively smaller surface area, and proportionally smaller losses of energy. For this reason, intuitively, “bigger” is often “better.”
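To make that intuition concrete, here’s a toy calculation (my own sketch, not from the show): for a torus of major radius R and minor radius a, the volume is 2π²Ra² and the surface area is 4π²Ra, so the volume-to-surface ratio — heating versus losses — works out to a/2, and grows with machine size.

```python
import math

def torus_volume(R, a):
    """Volume of a torus with major radius R and minor radius a (metres)."""
    return 2 * math.pi**2 * R * a**2

def torus_surface(R, a):
    """Surface area of the same torus."""
    return 4 * math.pi**2 * R * a

# Doubling the linear size doubles the volume-to-surface ratio,
# i.e. relatively fewer losses per unit of fusion heating.
for R, a in [(3.0, 1.0), (6.0, 2.0)]:
    ratio = torus_volume(R, a) / torus_surface(R, a)
    print(f"R={R} m, a={a} m: volume/surface = {ratio:.2f} m")  # equals a/2
```

The dimensions here are purely illustrative, not any real machine’s.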
Then there’s the fact that getting the plasma into a fit state to fuse requires large equipment. You need a big, complex apparatus to heat the plasma, involving neutral beams — particles accelerated to high speeds and then neutralised — and electromagnetic waves beamed into the plasma. You need big, powerful superconducting magnets, and all of the associated apparatus to cool them to temperatures near absolute zero. You need large stores of energy to charge up those magnets. And you need to make sure that all of this equipment is sufficiently protected from the damaging neutron radiation that arises when fusion takes place — shielding that inevitably takes up space too. Taking all of this into account partially explains why the ITER apparatus ends up occupying a complex a square mile in area. But this immediately hits you with all of the financial problems that we discussed in the Buzzkill episodes.
There are, however, trade-offs that arise in fusion engineering and design. Take, for example, the plasma pressure — defined as the density of plasma particles multiplied by the temperature. The fusion triple product — the one we want to maximise to produce a lot of energy — is then just plasma pressure multiplied by confinement time. But there’s a limit on the plasma pressure you can confine: you can support a larger plasma pressure if you have a greater plasma current, a stronger magnetic field, or a smaller plasma cross-section. This is known as the Troyon limit. And the closer the plasma pressure gets to that limit, the more likely you are to experience disruptions in the plasma.
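In its commonly quoted form, the Troyon limit caps the normalised pressure (“beta”) at roughly g·I/(a·B), with the Troyon coefficient g ≈ 2.8, the plasma current I in megaamps, the minor radius a in metres, and the field B in tesla. A quick sketch — the ITER-like numbers are my ballpark figures, not official specifications:

```python
def troyon_beta_max_percent(I_p_MA, a_m, B_T, g=2.8):
    """Troyon beta limit (%) ~ g * I_p[MA] / (a[m] * B[T])."""
    return g * I_p_MA / (a_m * B_T)

# Rough ITER-like numbers: 15 MA plasma current, a ~ 2 m, B ~ 5.3 T
print(troyon_beta_max_percent(15.0, 2.0, 5.3))  # ~4%
```

This is why the beta achieved in conventional tokamaks sits at just a few per cent.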
So when you’re designing a plant, you have a choice. Do you design a small plant that operates with a huge plasma pressure — and risk bumping up against the Troyon limit and damaging disruptions? Or, do you design a slightly larger plant, with a smaller plasma pressure but perhaps a longer confinement time, or that can produce more energy — and then risk it not being economically viable?
The more heating you need to provide to drive up the plasma temperature, the more energy you’ll need to feed into the system, and the more fusion reactions you’ll need to “earn” that energy back. So you might think that one solution would be an incredibly dense but relatively cool plasma — really jamming in as much plasma as possible. After all, typical plasma densities in a tokamak are around 10²⁰ particles per cubic metre. That might sound like a lot, but it’s actually much less dense than air, which has around 10²⁵ molecules per cubic metre. Surely there’s room for improvement here?
Well — unfortunately, there doesn’t appear to be. The limit on number density, which is set by the size of the plasma cross-section and the current that’s run through it, is the least well understood. This limit, called the Greenwald limit, doesn’t come from magnetohydrodynamic theory like some of the other limits we’ll discuss. Instead, it just arises from empirical observations — many experimental campaigns in tokamaks have shown that, if your plasma gets too dense, if you try to inject too much fuel into the tokamak — then you end up causing disruptions.
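The Greenwald limit has a strikingly simple empirical form: the density limit, in units of 10²⁰ particles per cubic metre, is roughly the plasma current in megaamps divided by π times the minor radius squared. A sketch with rough ITER-like numbers (my assumptions, not official figures):

```python
import math

def greenwald_limit(I_p_MA, a_m):
    """Empirical Greenwald density limit, in units of 1e20 particles/m^3."""
    return I_p_MA / (math.pi * a_m**2)

# Rough ITER-like numbers: 15 MA plasma current, 2 m minor radius
n_G = greenwald_limit(15.0, 2.0)
print(f"{n_G:.2f}e20 per cubic metre")  # ~1.2e20: the typical tokamak density
```

Which is why tokamak densities cluster around that 10²⁰ figure mentioned above.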
You might think that one solution is to drive a really high current through the plasma — but there are limits to how large this current can be. If the current driven through the plasma is too large, the external magnetic field struggles to contain the plasma as a whole, and the whole thing kinks and writhes out of control. There is another formula that determines where this happens — the maximum current you can drive through the plasma before the kink instability kicks in. This turns out to be proportional to the external magnetic field — stronger B-fields can contain larger plasma currents — and to the size of the plasma — a larger donut can carry a larger plasma current.
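For a circular plasma, this kink limit can be written via the edge “safety factor” q(a) = 2πa²B/(μ₀RI), which must stay above roughly 2. Solving for the current gives a ceiling; this is a simplified circular-cross-section estimate (real, shaped plasmas like ITER’s do somewhat better), using my own ballpark numbers:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (T*m/A)

def kink_current_limit(a_m, B_T, R_m, q_min=2.0):
    """Maximum plasma current (A) before the kink instability, from
    requiring the edge safety factor q(a) = 2*pi*a^2*B / (mu0*R*I) > q_min
    for a circular cross-section plasma."""
    return 2 * math.pi * a_m**2 * B_T / (MU0 * R_m * q_min)

# Rough ITER-like numbers: a ~ 2 m, B ~ 5.3 T, R ~ 6.2 m
print(kink_current_limit(2.0, 5.3, 6.2) / 1e6)  # ~8.5 MA in this simple estimate
```

Note how the stronger the field and the bigger the plasma, the more current you can safely drive.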
So, although you can dream up many different tweaks to all the tokamak parameters that might affect performance, there are two clear roads to a better tokamak. Either build a larger one, and reap the advantages of more fusion reactions and a longer confinement time. Or build a smaller one — in which case you’ll need a much stronger magnetic field to help confine the plasma. Most of the startups hoping to build smaller tokamaks, then, plan to do so by leveraging stronger magnetic fields than those ITER can use.
Commonwealth Fusion Systems is one such start-up, and perhaps the most promising. Spun out of MIT’s Plasma Science and Fusion Center, they aim to leverage high-temperature superconductors that weren’t available when ITER was designed to create a much smaller tokamak. High-temperature superconductors like YBCO (yttrium barium copper oxide) can produce a higher magnetic flux density with smaller amounts of material — which may allow for a smaller tokamak. Theoretically, the fusion power a tokamak of a given size can produce scales with the fourth power of its magnetic field strength — so more powerful magnetic fields are very important. Their current design aims to produce a fifth of ITER’s power output, in brief ten-second bursts, using a tokamak around a quarter of ITER’s diameter.
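To see why that fourth-power scaling is such a big deal, here’s a crude comparison (my own sketch; the radii and field strengths are approximate public figures, and real performance depends on far more than this scaling): at fixed beta, fusion power goes roughly as B⁴ times plasma volume, so a much smaller machine with a much stronger field can stay in the same league.

```python
def relative_fusion_power(B, R, B_ref, R_ref):
    """Crude scaling: P_fusion ~ B^4 * volume ~ B^4 * R^3
    at fixed shape and fixed beta."""
    return (B / B_ref) ** 4 * (R / R_ref) ** 3

# SPARC-like (~12 T, ~1.85 m major radius) vs ITER-like (~5.3 T, ~6.2 m)
ratio = relative_fusion_power(12.0, 1.85, 5.3, 6.2)
print(f"relative power: {ratio:.2f}")  # same order of magnitude, in ~1/40th the volume
```

Roughly doubling the field buys back most of what you lose by shrinking the machine.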
Yet these HTSCs pose trickier engineering challenges than niobium-tin superconducting wire, which is far easier to create and manipulate at scale: being ceramics rather than metals, they can be brittle and temperamental, and no one has yet built large-volume magnets out of this material. A further issue — one that has also plagued ITER, and even Bob Guccione’s “Riggatron” — is the flux of high-energy neutrons. It’s difficult even to test materials against this kind of radiation without a fusion reactor, as neutrons with this much energy are tough to produce. And the smaller the tokamak, the more intense the neutron flux will be. CFS propose to use liquid shielding that can be easily replaced once it becomes irradiated — but this approach is untested, and, like all neutron shielding, it will leave the tokamak with some amount of radioactive waste. If the expensive and delicate HTSC magnets cannot be adequately shielded, it’s back to the drawing board.
CFS has been financially backed to the tune of $65m by oil company Eni and Breakthrough Energy Ventures, the venture capital firm whose list of investors reads like a Who’s Who of tech billionaires. They argue that their proposed SPARC (Smallest Possible Affordable Robust Compact) reactor doesn’t rely on any unproven plasma physics to work. Empirically derived scaling laws in plasma physics suggest that similar behaviour can often be expected from smaller tokamaks operating at larger magnetic fields. While this reactor would still take hundreds of millions of dollars and years to build, CFS’s CEO, Bob Mumgaard, hopes that the design could leapfrog ITER to produce net power more quickly.
Tokamak Energy are another private venture, hoping to leverage decades of research expertise on tokamaks while avoiding straying too far from the scientific mainstream. The classic tokamak is toroidal — donut-shaped, with a hole in the middle. Tokamak Energy instead uses a spherical tokamak, with a much smaller aspect ratio and central hole. When developing the theory of plasmas in tokamaks in the 1980s, researchers noticed that several instabilities — including the kink instability — were suppressed by changing the geometry of the plasma. Spherical tokamaks replace the magnets threading the central hole with a single conductor, allowing them to be built more cheaply.
Several spherical tokamaks were built alongside existing tokamaks, such as the MAST spherical tokamak in the same Culham laboratory that houses ITER’s predecessor, JET. They boasted improved resistance to instabilities, but the geometrical differences meant that the achievable plasma pressures were lower, and the central conductor was directly exposed to the radiation from fast neutrons.
Much of the difficulty in making spherical tokamaks practical is that — if you substantially shrink that donut hole — you can fit much less in the interior. Squeezing the vacuum vessel, the neutron shielding, the central conductor, and the coils that wrap around the vessel into such a small space is hard. Generally this means cutting back on neutron shielding — but then the central solenoid is exposed to neutrons. They can damage the superconducting magnets and shorten their lifetime. They can also heat the superconductors above the temperature at which they stop superconducting. And fitting large amounts of cooling apparatus into the centre is difficult when you’re already running out of room. We’ve also talked about the large mechanical forces that act on the coils in ITER — the magnetic forces that push them apart. These have to be counteracted with big mechanical supports that keep the magnets in place, and the room to do this is limited in a spherical tokamak.
So ultimately, it remains unclear whether the advantages in plasma performance from a spherical tokamak will really overcome the disadvantages of having much less space to cram in all of these components — which might limit the kinds of magnetic fields you can operate with, or reduce the field strength you can achieve. Ironically, it may even force you to build an extremely large device — just so that the small, internal donut hole is big enough to fit everything in — defeating the original idea of using a more stable plasma to build a smaller tokamak, and running into all the same economic problems that large devices like ITER may face.
Although many in the research community were intrigued by the advantages of spherical tokamaks, they were a few generations behind the mainstream torus design, and their behaviour is consequently less well-understood. Tokamak Energy, which spun out of Culham’s research with the MAST device and is based in Oxfordshire, has received over £50m in funding for its spherical tokamak, and in June 2018 achieved plasma temperatures of 15 million kelvin — which is still dwarfed by JET’s 200 million kelvin. Much like Commonwealth Fusion Systems, they are hoping to exploit high-temperature superconducting magnets to achieve things that have defied previous generations of tokamak.
Those who back tokamaks will point out that, over the last few decades, their performance has increased by orders of magnitude. It might seem like the fact that nothing has so far succeeded in breaking even means that it’s time for a brand-new idea, but this failure to reach a symbolic (if important) goal does hide some considerable progress. Between the 1960s and the construction of JET, the fusion triple product of density, temperature, and confinement time has increased at a rate comparable to Moore’s Law for transistors. Who’s to say that the next bright idea for a fusion reactor won’t also require decades of development and progress just to catch up with tokamaks?
But, of the dozens of nuclear fusion start-ups out there, only a fraction are focused on tokamaks. While there are clear advantages to tweaking a design that is this well-studied, it also means that many different tokamak devices have already failed to achieve breakeven. If it turns out that tokamaks really do need to be ITER-sized or larger to be worth building, then they might never compete financially with alternative sources of energy. Many companies are therefore venturing into the unknown — in the hope that they’ll find an easier, cheaper route to nuclear fusion.
General Fusion, a Canadian company that has received over C$150m in various funding rounds from the Canadian government and private investors, uses a technique called magnetized target fusion. This is a mix between confining plasma with magnets and compressing it rapidly to high densities and temperatures in an implosion, aiming to satisfy the Lawson criterion with contributions from confinement time, temperature, and density.
Their reactor design is like something out of steampunk science fiction. Liquid metal, in the form of molten lead and lithium, is spun around rapidly inside a spherical chamber — creating a vortex at its centre. The plasma, held in magnetic fields, is then injected into the centre of this vortex. Then, steam pistons push the metal rapidly towards the centre: the vortex collapses, and, hopefully, the deuterium-tritium fuel is compressed and heated to fusion conditions, releasing energy in the form of fast neutrons. This energy heats the liquid metal, and that heat can be extracted to drive a turbine in a conventional power plant. Founder Michel Laberge, who described this as his “mid-life crisis” plan to save the world from global warming in a TED talk, argues that the liquid metal will also act as shielding, absorbing the neutrons before they can damage any other parts of the reactor.
Meanwhile, early prototypes of General Fusion’s device, based on a concept that was first investigated by the US Navy in the 1970s, have created some neutrons: usually a sign that at least some fusion reactions are taking place.
But producing neutrons is far from proof that you can produce energy. In Britain, in 1958, the ZETA device — based on a technique called “pinch” where a strong current is run through a plasma and the resulting magnetic forces heat and compress the plasma, hopefully resulting in a burst of fusion — produced neutrons. The newspapers had a field day and reported that “limitless energy” was finally on the brink of being achieved: but the neutrons arose from a tiny fraction of nuclei, and ZETA had no hope of reaching breakeven. It was famously an erroneous detection of neutrons that led Fleischmann and Pons to conclude that they had discovered “cold fusion”, in one of the most embarrassing debacles in scientific history.
When the US Navy originally investigated this idea in the 1970s, they abandoned it for the same reason that laser fusion has fallen into the doldrums: it’s extremely difficult to get the fuel capsule to implode *exactly* right, with the near-perfect symmetry that’s required to allow inertial confinement fusion to produce more energy than is input. Any slight deviations on the surface of the shock wave will cause Rayleigh-Taylor instabilities, with tendrils of plasma bursting outwards as the fuel is compressed, reducing the temperature, density, and hence the number of fusion reactions that can take place. Laberge hopes that — now pistons can be controlled by servos and computers to be closer to simultaneous, operating within microseconds of each other — it will be possible to generate that perfect shock wave to achieve net energy gain, and succeed where NIF’s lasers failed. While a current demonstration model uses fourteen pistons, the final design is likely to require hundreds of pistons to achieve this all-important symmetry.
Skeptics would suggest that, if getting the spatial and temporal coherence required is difficult with lasers — where the beams can be very finely controlled using optical devices — doing it mechanically may prove even harder. I would count myself as one of those skeptics.
Surrounding the fusing target with a blanket of liquid metal, on the other hand, is elegant. It would absorb the radiation, heat, and neutrons, potentially getting around all the extremely complicated business of neutron shielding and first-wall materials. It provides a means of harnessing any energy you do generate from fusion. And if you mix lithium into the lead, as they hope to in their device, neutron absorption by the lithium breeds tritium — so your reactor produces its own fuel in future. But physically compressing the plasma with liquid metal has its own problems. You will struggle to prevent impurities from the liquid metal getting into the plasma — and, when that occurs, it’s difficult to see how you reach fusion conditions, because the impurities radiate away energy and ruin the idealised conditions you’re trying to create in the fusion fuel. In early tokamaks, impurities introduced simply by slightly dusty tokamak walls were enough to degrade and ruin performance; even in ITER, impurities from the divertor or first wall breaking up under the strain are a concern. When you are essentially attempting to compress the plasma with large amounts of lead “impurity”, it’s hard to see how lead doesn’t end up in the plasma target.
Nevertheless, attempts to generate fusion through shock waves launched at hot, dense plasma confined by a magnetic field — magnetized target fusion — are being pursued at other institutions, including Los Alamos. Helion Energy, which has received millions from sources including the US Department of Energy and startup incubator Y Combinator, aims to generate this compression using pulsed magnetic fields: their fifth-generation device, “Venti”, went online in 2018. We’ll talk about them — and some of the other start-ups pursuing different weird and wonderful routes that may, someday, hopefully lead to fusion — next week.
Can A Startup Build A Star? Part Two
Tri Alpha Energy — which has received over $500m in funding, employs around 150 people, and has sustained itself as a company since around 1997 — have demonstrated that they can confine plasma for milliseconds in their reactor.
They aim to deploy a form of magnetized target fusion that resembles particle accelerators in many ways: confining spinning plasma in magnetic fields, and then accelerating it in their “plasma collider”, hoping to reliably produce energy when the collisions happen. This is far from the magnetohydrodynamic limit where plasma behaves something like a fluid: instead, it operates in this strange, transitional regime of plasma behaviour, where tracking the individual motions of at least some of the trillions of particles is important. There are dozens of different parameters that can be adjusted on the machine: once the spinning blob of plasma is formed from the collisions, it’s then bombarded by neutral hydrogen atoms to heat it to fusion conditions, while simultaneously being controlled by the magnetic fields.
Given that Tri Alpha can run a plasma “shot” every eight minutes during operation, and given that they are probing this huge parameter space full of non-linear interactions and complex behaviours, it’s perhaps no surprise that they are very into data analytics. In a high-profile collaboration with Google, they explored the vast number of settings for the machine, looking for interesting behaviour — and found a more stable regime where, for a few milliseconds, radiative losses were cancelled out by the energy delivered from the neutral beam injection. It’s clear that Tri Alpha Energy can produce fascinating new plasma physics in this extremely complex regime: but it’s impossible to know if some magical combination of magnetic fields, plasma acceleration, and ion bombardment will allow them to exploit these non-linearities to create some Holy Grail of plasma states that will generate more energy through fusion than it requires to set up.
Parisi and Ball note another concern with Tri Alpha Energy: its choice of fuel. Most reactors use deuterium-tritium fuel, but TAE uses proton-boron fuel. In this reaction, a single proton fuses with a nucleus of boron-11, which splits into three alpha particles and releases a great deal of kinetic energy — hence the name, Tri Alpha Energy!
There are some advantages to this reaction, for sure. For a start, you’ll notice that there are none of those pesky neutrons involved. Alpha particles are charged, and therefore much easier to stop — in fact, a few sheets of paper would probably be sufficient to stop alpha particles from escaping the device, although you would end up with some heavily irradiated paper. With no neutron damage to components, there are many engineering challenges you no longer need to worry about, and your reactor can be much simpler in its design. What’s more, since you’re producing fast-moving charged particles, you might be able to harvest energy directly from their motion, rather than going through the whole inefficient thermodynamic process of using heat to create steam that spins turbines. And, finally, the fuels for the proton-boron reaction are abundant, naturally occurring, and really easy to come by. A proton is just ionised hydrogen, and there’s plenty of hydrogen about (it can be made easily from water). Boron costs around $5 per gram, and there are billions of tonnes of it in the Earth’s crust in ores like borax. So you don’t need to worry about obtaining and handling tritium, which is rare and radioactive, and you don’t need to worry about using your device to breed more fuel.
So you’re obviously thinking: why isn’t everyone pursuing proton-boron fusion? It obviously has a whole bunch of advantages compared to the deuterium-tritium reaction. Well, there are reasons — pretty damning reasons. First off, D-T fusion has a much lower energy barrier — it’s more than ten times more likely to happen, and can occur at ten times lower temperatures than proton-boron fusion. When everything is taken into account, you need a fusion triple product that’s at least a thousand times larger with proton-boron fusion, and you’ll need to heat the plasma to even higher temperatures than the super-duper hot sun temperatures that have been attained in JET and other devices.
But the real killer comes in the form of particle radiation. As we’ve said on this show plenty of times, charged particles radiate when they accelerate, decelerate, or change direction. In the case of a D-T plasma, most of the energy losses are through turbulence, or particles escaping confinement. For proton-boron fusion, though, the high charge of the boron nucleus — five protons — makes things extremely difficult. Each boron nucleus brings five electrons along with it into the plasma, and those electrons do nothing but radiate away energy.
If you do the calculations, you can show that, under fusion conditions, a D-T plasma will radiate away just under 1% of the energy it produces as its charged particles move around in the device. However, a proton-boron plasma will radiate away perhaps 200% of the energy that it produces. Since it radiates away more energy than it produces, you’re never going to get “ignition” in a proton-boron plasma — it will never be able to operate without an external heating source, because the more energy is produced by fusion, the more that the particles will radiate away.
It may still be possible to harness some energy from proton-boron fusion. But to do this, you’d need to constantly supply external heating, and you’d need to be sure you can capture and usefully harness a huge fraction of the power produced by the fusion reactions. Parisi and Ball estimate that — even if your only losses are from this radiation, with no turbulence at all, which seems unlikely — the product of your heating efficiency and your power-collection efficiency would need to exceed 47% just to produce net electricity. The heating efficiency of neutral beam injection is currently around 30%, so that would need to get substantially better. And converting the energy released from fusion would need to be extremely efficient as well — far more efficient than fossil fuel power plants, in a much more complicated device. All that’s just to get Q > 1; making the thing economically viable is much harder.
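Here’s a back-of-envelope version of that argument — a deliberate simplification of Parisi and Ball’s calculation, with made-up round numbers: if the plasma radiates away twice the fusion power, external heating must cover the shortfall, and the product of heating efficiency and collection efficiency has to clear roughly one half before any net electricity emerges.

```python
def net_electric_power(P_fus, eta_heat, eta_collect):
    """Toy p-B power balance: radiation = 2 * P_fus, so auxiliary heating
    must supply P_aux = P_fus to hold the temperature steady.  All power
    eventually leaves as heat/radiation and is collected at efficiency
    eta_collect; the heating systems consume electricity at eta_heat."""
    P_aux = P_fus
    electricity_out = eta_collect * (P_fus + P_aux)
    electricity_in = P_aux / eta_heat
    return electricity_out - electricity_in

# Breakeven in this toy model needs eta_heat * eta_collect > 0.5 --
# in the same ballpark as Parisi and Ball's 47% figure.
print(net_electric_power(100.0, 0.70, 0.70) > 0)   # product 0.49: just short
print(net_electric_power(100.0, 0.75, 0.75) > 0)   # product 0.56: net positive
```

Even in this best case, with zero turbulent losses, both efficiencies would have to be far beyond anything demonstrated today.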
So proton-boron fusion might be a unique selling point for Tri Alpha Energy, and on paper it has a huge number of advantages as a fuel. But actually harnessing net energy from these reactions will prove awfully difficult: it’s a fuel you can’t really “burn”, as it will always radiate away more energy than it produces to heat itself, and so it cannot be “ignited” in any real sense of the word.
Lawrenceville Plasma Physics is a start-up that also hopes to pursue proton-boron fusion — but with big pinch-like devices: arcs of electricity that briefly produce very hot, dense conditions in the plasma. Such devices have been used as neutron sources, as you can get thermonuclear reactions out of them, but they generally aren’t considered viable as fusion reactors. The device acts a lot like a particle accelerator — and, currently, only around 1 in 25 ions actually undergoes a collision, so the rate of fusion reactions is very low. Since a lot of energy is put into accelerating particles that then don’t fuse, you can’t afford many losses. And accelerating the particles, of course, runs into the same proton-boron radiation issue from before.

LPP are worth mentioning because they do have a particularly ingenious idea. If you can produce high enough magnetic fields in the device, then quantum mechanics kicks in and modifies how the electrons gyrate and travel — which may, in theory, reduce the amount of energy that the electrons have. This could produce a strange plasma with hot nuclei — hot enough to collide and fuse — but cold electrons, which won’t radiate energy away so effectively. While this idea, exploiting the quantum-mechanical magnetic field effect, is really awesome and neat, it would require magnetic fields that are unbelievably high: around a million tesla — close to the magnetic fields found at the surface of a neutron star, and more than a thousand times stronger than anything we’ve ever created on Earth (at least, without utterly destroying the device that created it). Needless to say, this has never been demonstrated in an experiment, and would appear to be an awfully long way off — if it’s even possible at all. Without this trick, LPP’s device will also be doomed to radiate away far more energy than it can produce.
Proton-boron fusion is not the only alternative-fuel fusion reaction that has been proposed. Helion Energy — as its name hints — would use deuterium that fuses with Helium-3, rather than deuterium and tritium. Remember, tritium is hydrogen with two neutrons, so Helium-3 is similar but with one neutron replaced by a proton.
This produces fewer neutrons, which simplifies power-plant design. And its required triple product is not that much higher — conditions needn’t be too much hotter or denser than those you need for D-T fusion. The only problem is that helium-3 is extremely rare: in fact, most of the stock of helium-3 that we actually use comes from decaying tritium! It’s expensive, and doesn’t exist on Earth in the required quantities. Helion Energy propose creating their own helium-3 from deuterium-deuterium fusion reactions. But these reactions produce neutrons themselves — so, you can see, we’re right back where we started: needing to find some way to produce our fuel in a self-sustaining way, and needing to deal with reactions that produce some pesky neutrons after all.
First Light Fusion is another start-up based in Oxfordshire, close to perhaps the world’s most famous working fusion reactor, JET. They’re aiming to succeed in inertial confinement fusion, but argue that their scientific strategy is based on working with plasma instabilities, rather than trying to suppress them with ever more complicated devices. Instead of trying to achieve a perfectly symmetrical implosion of the target, they use computer modelling to design strange, asymmetrical targets, which are then compressed rapidly by shock waves. This approach grew out of the founder’s DPhil research at the University of Oxford into cavity collapses.
The plasma in the target won’t be heated uniformly, but First Light Fusion hope that an asymmetrical collapse can still produce regions of fusion in the target, where temperatures and densities are high enough, that are large enough to provide net energy. The company aims to be able to rapidly prototype and test new targets, iterating towards a perfect target geometry — perhaps aided by machine learning or improved plasma physics modelling. So far, they have received £135,000 in funding from sources including the UK Government, and have demonstrated implosions on asymmetrical targets.
Lockheed Martin caused a considerable stir when they announced in 2014 that they were working on a compact fusion reactor — with the ultimate aim of using it to power aeroplanes. At the time, the claims were striking from a reputable company: five years for a working prototype, and ten years for a design ready for mass-production. A more recent update in 2017 suggested that this aim had run into problems: their estimate for the required weight has ballooned from a 20 tonne conceptual design to a 2,000 tonne prototype. That’s difficult to fit on an aeroplane, but — if it worked — would still be less than a tenth of the size of ITER. The secretive Skunk Works department is not short of the technical expertise, or funding, to become major players: but they are reluctant to publish too many technical details.
A major issue with tokamaks that Lockheed’s “Compact Fusion Reactor” aims to solve is the beta limit. There is only so much plasma that a tokamak’s magnetic fields can hold in place, and the beta limit essentially tells you how much plasma pressure you can achieve — which matters for that triple product of density, temperature, and confinement time — for a given magnetic field.
The beta limit is essentially set by the geometry of the reactor design: for conventional tokamaks, it’s around 0.05, or 5%. This means that, to get a high triple product, you need long confinement times, high temperatures, and strong magnetic fields — which usually translates into bigger machines, like ITER. Spherical tokamaks improve the situation: the record beta, 0.4 or 40%, was achieved by the START spherical tokamak at Culham.
Lockheed hope that their design will have a beta approaching 1, meaning you could confine plasma pressures — and hence densities, at a given temperature — some 20 times higher with the same magnetic field strength. This would then allow for a substantially smaller reactor.
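Beta is just the ratio of plasma pressure to magnetic pressure, p / (B²/2μ₀), so at a fixed field the confinable pressure scales directly with beta. A quick sketch, with an illustrative 5-tesla field (my numbers, not Lockheed’s):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (T*m/A)

def max_plasma_pressure(B_T, beta):
    """Plasma pressure (Pa) a field B can hold at a given beta:
    p = beta * B^2 / (2 * mu0)."""
    return beta * B_T**2 / (2 * MU0)

B = 5.0  # tesla, illustrative
p_tokamak = max_plasma_pressure(B, 0.05)  # conventional tokamak, beta ~ 5%
p_compact = max_plasma_pressure(B, 1.0)   # hoped-for beta ~ 1
print(p_compact / p_tokamak)  # 20x the pressure, so ~20x the density at the same T
```

That factor of 20 is where the hope of a dramatically smaller reactor comes from.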
How might this high beta be obtained? Again, it’s a matter of geometry. The aim is to find geometries that exploit the plasma’s own internal magnetic field, and the currents that flow along its surface as the plasma is confined. This magnetic field would push against the external field that contains the plasma, creating — hopefully — a self-tuning feedback, where the further out the plasma goes, the stronger the B-field pushes back to contain it.
Concepts similar to this were developed in the 1950s under the name of the “magnetic mirror”, but those magnetic bottles were leaky, and containing the plasma particles proved difficult. With new geometries — some of which may look like many-pointed stars or spinning tops laid end-to-end — it may be possible to limit this leakiness to small regions, or “cusps”, where the magnetic field changes sharply. If these losses can be kept sufficiently low, then, researchers hope, this combination of cusp confinement and magnetic mirrors might allow plasma to be contained for long enough to produce net energy through fusion.
Yet Lockheed are reluctant to publish much experimental data — beyond the computational plasma modelling that initially led them towards building fusion devices. It’s difficult to know precisely what techniques they’re using, or whether they have achieved a breakthrough in cusp confinement compared to the original researchers — or even rival companies like EMC2, which uses cusp confinement in its Polywell devices.
According to Jason Parisi and Justin Ball in their marvellous book, “The Future of Fusion Energy”, Lockheed are pursuing a device that’s similar to the magnetic mirror, but with extra magnetic coils placed inside the device, intended to further shape the field and prevent the plasma particles from escaping except through these cusps. This is theoretically quite promising, but issues arise from the practical implementation of the magnetic fields and the coils inside the device itself. In these thin cusp regions close to the coils, there are likely to be large amounts of turbulence. A cusp is a sudden change in conditions — big pressure, temperature and magnetic field gradients — and these big gradients drive turbulence. Having a whole bunch of turbulent plasma right next to your superconducting magnetic coil is an extremely difficult problem. How can you remove what might be megawatts of heat from these coils to keep them superconducting? How can you hold the coils in place? You will evidently need large mechanical supports to keep them there, and you’ll also need to run coolant through those supports to keep the internal coils cold. But these supports will be extremely close to the hot, turbulent plasma: if plasma touches that solid material, it will probably melt the supports, and impurities will be introduced into the plasma itself. Lockheed has apparently proposed shielding these supports with more magnetic fields — but this will in turn be really difficult.
The plasma is moving incredibly quickly, at incredibly high temperatures, crossing the device many millions of times every few minutes; so even if the shielding fails to block the particles only occasionally, you can expect these supports to disintegrate.
In other words, while the design may well work wonderfully in simulations or on paper, and provide good confinement for the plasma — practically building something with these coils actually inside the device is likely to be a huge, maybe impossible engineering challenge.
The fact that their projections, both for the size of the device and the timeline for its construction, continue to get more pessimistic — they’re now talking about the early 2020s — is a familiar story in the history of fusion: and it’s a story that rarely ends well. An unclassified briefing in 2017 [http://www.thedrive.com/the-war-zone/20289/china-touts-fusion-progress-as-new-details-on-lockheed-martins-reactor-emerge] suggested a few of the early designs in the cycle had already failed to work as intended. Success in fusion doesn’t happen overnight: it’s still an open question as to whether our heightened understanding of plasma physics, our supercomputers, and our superconductors will enable tweaked versions of the magnetic mirror design to overcome the problems encountered decades ago.
So it’s exciting to see a company like Lockheed, with a good reputation and a great deal of talented scientists and engineers, taking alternative approaches to fusion seriously — but we should not be convinced that they have cracked it just because it’s a private company. A major part of the reason that people in the mainstream fusion community are skeptical is the nature of Lockheed’s initial announcements, which talked about fusion reactors that could fit on the back of a truck within a decade or so. Even the auxiliary equipment to heat the plasma, inject the ions, and establish and cool the superconducting magnets would fill a building with today’s technology.
Making bold claims without setting any roadmap that shows how it’s possible is perilously common in these startups, and doesn’t win you any friends.
We discussed some of the alternative means of pursuing inertial confinement fusion in our episode on NIF — such as fast ignition, or using NIF’s large laser in direct-drive mode to illuminate the capsule directly. But there are also major alternative routes towards magnetic confinement fusion. One particularly worth discussing is the stellarator, which has not been totally abandoned.
Remember the difference between a stellarator and a tokamak. In a tokamak, the magnetic field that helps to avoid particle drifts is generated by running a current through the tokamak’s plasma. In a stellarator, you avoid running any current through the plasma, and instead try to cancel out these drifts using a really complicated, externally imposed magnetic field. This means you have to sacrifice a simple shape for your system — it ends up having to whirl and writhe around, resembling one of those really fast roller-coasters, or a complicated, twisting race-track with lots of ups and downs.
Similarly, the coils and the magnetic fields they produce don’t have simple shapes, either. Instead, they must also twist and writhe, taking on complex shapes. Physicists generally love symmetry, and it’s not because we’re obsessively lining up everything to look good like Wes Anderson directing a movie. We love symmetry because it simplifies our equations, and makes behaviour easier to predict. In a symmetrical situation, you have less information to keep track of. Imagine something that’s symmetric along one axis, so that it doesn’t matter where you are along that axis: your problem becomes two-dimensional, rather than three-dimensional. The equations are simpler to solve, and the behaviour is simpler to predict.
A torus — the donut shape of a tokamak — is nice and symmetric, and the orbits that the particles will follow in a tokamak are easy to predict and calculate. But the complex, asymmetric shapes of a stellarator are far more mind-bending to work with. If you design your stellarator incorrectly, then the standard orbits of the particles might inevitably cause them to drift out of the device altogether.
The more complicated the design, the more expensive it is to build, the more precisely every part of the curvature must be manufactured, and the more things can potentially go wrong. There’s a reason you don’t tend to see machines with bizarre, artistic curvature that looks more like sculpture. And if every stellarator takes 20 years to build and costs as much as the Wendelstein, then stellarators are likely to suffer from many of the same problems in becoming economical power sources that currently plague tokamaks.
Designing and building a tokamak, and predicting how its plasma might behave, are already fiendishly difficult tasks; the stellarator makes these tasks even more complicated. But it’s not just about simplifying the design or the calculations. Because tokamaks are relatively simple, you can usually predict the temperature and density of the plasma in real time while it’s running. This allows for some degree of real-time “control” and feedback over the system — you’re able to respond to changing conditions in the plasma and perhaps tweak the field, the current drive, or the heating to get better performance and behaviour from the plasma in a tokamak. But this can only be done because the equations can be solved faster than the bulk of the plasma is moving and changing. This kind of feedback and control mechanism, which is part of ITER, will be much more difficult to implement in stellarators.
The funky shapes required by stellarators pose their own engineering headaches, as well as computational and theoretical ones in determining how the plasma will behave. Manufacturing a nice, symmetrical coil for a tokamak is relatively simple; manufacturing a complicated, twisty set of magnetic coils for a stellarator is more difficult, and more specialised. Because these 3-D coils require complicated shapes and sharp bends, they are also inherently weaker and more prone to mechanical strain. And because the forces acting on a stellarator’s coils grow with the magnitude of the magnetic field they produce, this engineering challenge sets a limit on the maximum magnetic field that you can feasibly use in a stellarator.
So, given all of these caveats, why are people still enthusiastic about stellarators? There are several reasons. With no plasma current to be disrupted, disruptions aren’t a problem. With no plasma current, you won’t get large, current-driven kink instabilities either. In fact, in general, the failure mode for stellarators is less damaging than that for tokamaks. What happens in stellarators is that plasma particles tend to drift out towards the walls, gradually cooling the plasma and leading to a loss of confinement. They can be leaky bottles. But you don’t get the violent disruptions you do with tokamaks, which can potentially damage the equipment. If, for example, ITER is ruined by disruptions — or they prove hard to predict or mitigate — then a reactor with a less damaging failure mode could be an advantage. It also circumvents some of the materials science challenges that we talked about: suddenly, the first wall and the divertor don’t need to be quite so strong to stand up to the punishment from disruptions.
It may be easier to get net energy out of a stellarator, once you get it working, and if confinement times are long enough. Once you’ve heated the plasma to the “ignition” stage, such that the energy to drive more fusion reactions is provided by fusion reactions alone, then you don’t need to provide the plasma with any more energy. You don’t need to drive the plasma current, as you do in a tokamak. In fact, the main source of energy consumption would probably be in keeping the superconducting magnets cool — as, once current is flowing through them and their magnetic field to confine the plasma has been established, they won’t require any more energy either.
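The “ignition” condition mentioned here is usually phrased in terms of that triple product of density, temperature, and confinement time. For deuterium–tritium fuel, a commonly quoted ballpark threshold is around 3 × 10²¹ keV·s/m³, though the precise value depends on assumptions about plasma profiles and impurities. As a hedged sketch, with made-up numbers roughly in the regime ITER aims for:

```python
# Commonly quoted ballpark ignition threshold for D-T fusion, in keV*s/m^3.
# (The precise value depends on assumptions about the plasma; this is an
# order-of-magnitude illustration, not a design criterion.)
IGNITION_TRIPLE_PRODUCT = 3e21

def ignited(density_per_m3, temp_kev, confinement_s):
    """Does this (density, temperature, confinement time) combination
    exceed the rough D-T ignition threshold?"""
    return density_per_m3 * temp_kev * confinement_s >= IGNITION_TRIPLE_PRODUCT

# Illustrative numbers only:
print(ignited(1e20, 15, 3.0))   # True: 4.5e21 exceeds the threshold
print(ignited(1e20, 15, 0.5))   # False: 7.5e20 falls short
```

Past that threshold, the alpha particles from fusion keep the plasma hot by themselves — which is why a steady-state stellarator that reaches it would need so little external power.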
The fact that stellarators are like this — that they essentially, intrinsically operate in a steady state — means that they can sustain longer pulse durations than other devices. Stellarators have confined plasmas for almost an hour, substantially longer than the few minutes that stands as the record for tokamaks — although the triple product, and hence the fusion energy produced, is much lower in stellarators than the tokamak record.
The final reason that people are quite excited about stellarators is the idea that, perhaps, the complexity of stellarators could be an advantage in itself. Yes, it means that the behaviour of the plasma is more difficult to model and understand, but it also gives you more degrees of freedom — more knobs that you can twiddle in the hope of finding a perfect parameter combination. As our ability to computationally model how plasmas will behave gets better, it may be possible to work out how they will behave in extremely complex magnetic field geometries. Machine-learning algorithms and optimisation algorithms could even try to “explore” this huge space of different stellarator designs, in an attempt to determine which design will produce the best performance, before the stellarator is actually built. It may seem a little unlikely that there will exist some magical, fiendishly-complex arrangement of magnetic field lines that proves to be just perfect at confining hot, dense plasma for months on end for fusion to work. You are effectively hoping to find that perfect design for a magnetic field that will cause all of the little drifts to perfectly cancel out. But the huge array of different designs one can imagine for a stellarator means that it’s very difficult to rule out this idea entirely.
The largest ongoing stellarator project at the moment is probably the Wendelstein 7-X, which was planned from 1997 and opened in Germany in 2015. Like many other fusion projects, it also ran over-schedule and over-budget, eventually coming in with a price-tag of 1 billion euros. Determining the complex, twisty design of the Wendelstein required supercomputer time and the latest magnetohydrodynamic simulations. It’s in the early phases of its operation at present, but it’s already reaching triple products that are close to what JET has achieved, and confinement times longer than JET has managed. By 2021, they are hoping that the device will be continuously operated for 30 minutes — and remember, this continuous operation is going to be key for any fusion plant to be economically competitive, whether it’s a tokamak or a stellarator. As the big stellarator project, there’s a lot riding on it for advocates of this kind of fusion reactor. If ITER fails or stalls while the Wendelstein performs better than expected, funding might end up getting diverted into stellarator research projects. On the other hand, if the Wendelstein performs less well than expected, or encounters some new problem, you can expect the stellarator revival to fall out of favour once again.
So how is the Wendelstein doing since it was switched on? Here’s the project’s scientific director, Thomas Klinger:
“The Wendelstein stellarator project is following a step-wise approach to full operation, much like ITER will. We have conducted two experimental campaigns and are now preparing for the third. We started the first experimental campaign in 2015 with somewhat of a “naked machine” — the machine was constructed, successfully commissioned and had created first plasma, but at that time, it was not yet equipped with proper plasma-facing wall components or a divertor. Instead, it just had a limiter and a metal wall.
Over the following 14 months, we worked on the divertor, the machine’s exhaust system for extracting heat and particles, which enables us to control the density and the purity of the plasma. In addition, we installed graphite tiles in the areas of the vacuum vessel with higher heat loads.
The divertor and the in-vessel cladding were real gate-openers. We saw a whole new world. We could increase the heating power and achieve much longer plasma discharges, but we still had problems with obtaining high plasma densities. We identified the problem — oxygen impurities emanating from water released by the graphite tiles were strongly emitting light. We solved the problem by conducting wall conditioning by boronization (oxygen is “pumped out” by boron). All of a sudden we had clean plasmas, the oxygen light emission dropped by a factor of ten, and we were able to ramp up plasma densities to much higher values.
Consequently, in our most recent campaign in 2018, we could extend the pulse duration, achieving higher plasma temperatures and densities. With an input heating energy of 200 MJ we achieved a 30-second plasma at 6 MW, and at reduced power we achieved a 100-second plasma at 2 MW. These are among the best results achieved so far by any stellarator.”
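The figures Klinger quotes hang together, since input heating energy is just heating power multiplied by pulse length. A trivial check, using only the numbers from the quote above:

```python
# Heating energy (MJ) = heating power (MW) * pulse duration (s).
def heating_energy_mj(power_mw, duration_s):
    return power_mw * duration_s

print(heating_energy_mj(6, 30))    # 180 MJ for the 30-second, 6 MW plasma
print(heating_energy_mj(2, 100))   # 200 MJ for the 100-second, 2 MW plasma
# Both are consistent with the roughly 200 MJ of input heating energy
# mentioned in the quote.
```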
Most recently, in 2018, the Wendelstein’s second phase of experimentation was completed, and it’s now being upgraded to reach that peak performance with — hopefully — confinement times of up to 30 minutes. Highlights from the first phase included reaching the maximum triple product ever achieved by a stellarator, and successfully testing the graphite-plated first wall and the divertor for the stellarator. As I write this, in 2019, it’s still undergoing the upgrade for the next phase of experimentation, including the introduction of a water-cooled first wall and divertor system so that it can operate at higher energies, and for longer times, without melting those components.
There is clearly no shortage of alternative, dark-horse projects aiming for nuclear fusion. Each has its own charms, challenges, and quirks: each can point to its own unique selling points. Many are backed by millions of dollars of venture capital funding, and attract disaffected plasma physicists left out in the cold by the focus on ITER. Some offer tweaks to the mainstream designs; some resurrect ideas that were abandoned decades ago in favour of tokamaks; while others venture into more radical, unproven territory. Many have compelling scientific arguments surrounding why they might succeed — and all share the compelling economic argument that fusion power will require reactors smaller than ITER to be commercially viable.
In many ways, away from NIF, ITER and the other big tokamaks, the landscape of fusion today resembles the 1950s more than anything else. Then, too, there were dozens of different ideas for advancing fusion being enthusiastically pursued by each of their devotees, and it was difficult to say which — if any — had the best chance of success. The naïve optimism of the 1950s — that fusion might be easy to achieve — has been replaced by a tempered techno-optimism. Now we know more, we can overcome the challenges. Now we have access to high-temperature superconductors and machine learning, as well as decades of plasma physics results, we aren’t going into a landscape that’s totally unknown. Different devices and different approaches, painstakingly, have been able to achieve continuous improvements in fusion over the decades without hitting a wall that no-one can find their way around. Fusion, then, seems to be merely incredibly difficult, and not impossible.
Yet ultimately, we shouldn’t kid ourselves that what they’re predicting is extremely likely to come to pass. Published science from the startups is not always particularly complete, and doesn’t often back up the hyped PR claims that they will have viable fusion reactors within a decade. The triple products achieved by these machines are, universally, many thousands of times smaller than those achieved by more established rivals like tokamaks and stellarators. Many flowers may bloom in the fusion startup world, but they’re relying on an astonishing amount of luck — or on finding some design that’s millions of times better than what’s been pursued for decades — to overtake tokamaks. And we have seen over and over again through fusion history tales of scientists who thought their devices were far closer to achieving net power from fusion than they really were — or who simply oversold the potential of the device to get funding. Perhaps the most promising candidates are those that tweak the tokamak design with new, high-temperature superconducting magnets — they operate with a far greater degree of science behind them. But even these startups still have a long way to go.
It can seem amazing that, nearly seventy years after the first promises were made that fusion would produce our energy within a few decades — and after countless broken promises of this kind — there are still optimists out there who can say it with a straight face. But then again, perhaps it’s not so surprising. To believe you can succeed where decades of research, generations of plasma physicists, and billions of dollars have yet to, you need a sunny disposition.