N.B.: A podcast script from Feb 8, 2018, which I’ve dug up due to its new relevance.

One chapter in Nick Bostrom and Milan Ćirković’s book, Global Catastrophic Risks, deals with something called the millenarian response to global catastrophic risks. It’s a fascinating area, and I think it leads us into a bigger discussion about the psychology of the end of the world, so… here we go.

Broadly speaking, millennialism doesn’t have anything to do with being a “millennial” in the sense of being born in the 90s and remembering Buffy the Vampire Slayer. It is a name given to a set of different viewpoints about how the future is going to unfold. They all share a deeply ingrained sense of destiny: things have to unfold in this particular way; it’s almost inevitable that they will. And in a lot of ways, you can characterise it as the following: “Millennialism is the expectation that the world as it is will be destroyed and replaced with a perfect world, that a redeemer will come to cast down the evil and raise up the righteous.” And so there are lots of different versions of this. You can see that it’s closely linked to the idea of a utopia, where the current society is destroyed and replaced by a perfect one; or a general apocalypse — maybe nothing will survive, and the ‘perfect world’ will be one completely without humans: kind of like a Gaia’s Revenge, where our own hubris as a species leads to us being destroyed, and nature returns.

This is the kind of apocalypse that has been predicted before: the grim, industrial terrors of the human-made Earth replaced by something cleaner, something greener. Here’s John Betjeman, the Poet Laureate:

“Come friendly bombs and fall on Slough!
It isn’t fit for humans now,
There isn’t grass to graze a cow.
Swarm over, Death!

Come, bombs and blow to smithereens
Those air-conditioned, bright canteens,
Tinned fruit, tinned meat, tinned milk, tinned beans,
Tinned minds, tinned breath.

Mess up the mess they call a town-
A house for ninety-seven down
And once a week a half a crown
For twenty years.

And get that man with double chin
Who’ll always cheat and always win,
Who washes his repulsive skin
In women’s tears:

And smash his desk of polished oak
And smash his hands so used to stroke
And stop his boring dirty joke
And make him yell.

But spare the bald young clerks who add
The profits of the stinking cad;
It’s not their fault that they are mad,
They’ve tasted Hell.

It’s not their fault they do not know
The birdsong from the radio,
It’s not their fault they often go
To Maidenhead

And talk of sport and makes of cars
In various bogus-Tudor bars
And daren’t look up and see the stars
But belch instead.

In labour-saving homes, with care
Their wives frizz out peroxide hair
And dry it in synthetic air
And paint their nails.

Come, friendly bombs and fall on Slough
To get it ready for the plough.
The cabbages are coming now;
The earth exhales.”

A millennial belief, then, in the destruction of Slough, and the return of the town to the cabbages.

Even Norse mythology has an element of this, as James Hughes points out in his essay in Bostrom’s book: Ragnarok involves men and gods being defeated in a final, apocalyptic battle — but because that was a little bleak, the myth adds the idea that a new earth will arise where the survivors will live in harmony.

Of course, a lot of millennial beliefs are exemplified for some of us by aspects of Christian theology, although this strand only really became mainstream in the 19th and 20th centuries. It’s the kind of thing that drifts in and out of fashion in a major world religion. So in this case, you have ideas like the Tribulations — perhaps many years of hardship and suffering — before the Rapture — when the righteous will be raised up and the evil punished — and then the world will be made anew, or humans will ascend to paradise. Lest anyone accuse me of picking on the Christians, let’s point out that Marxism is really the exact same thing. There’s a sense of destiny — Marxism is all about a deterministic view of history that builds to a crescendo. In the same way as Rapture-believers look for signs that the world is becoming unstable and prophecies are beginning to be fulfilled, so Marxists look for evidence that we’re in the late stages of capitalism. They believe that, inevitably, society will degrade and degenerate to a breaking point — just as some millennial Christians do. In Marxism, this is when the exploitation of the working class by the rich becomes too great, and the workers band together and overthrow their rulers in a proletarian revolution. This, then, is the tribulation: the struggle, and the trials, and the apocalypse that must precede the world being made perfect and made new. Sometimes revolutionary figures, like Lenin, or Marx himself, are heralded as messiahs who accelerate the onset of the Millennium. And, of course, there is judgement, when the righteous workers take what’s rightfully theirs and the evil bourgeoisie, and the system of capitalism more generally, is destroyed. In systems like Mao’s China, or Stalin’s Russia — where the revolution has decidedly happened, and the “Millennial” event has therefore already occurred, yet there is no paradise on Earth — you have to change the date of the Tribulations a little bit. Instead, they say that the Tribulations are still occurring: the proletariat must struggle towards attaining full communism, at which point the world will have been completely remade. This involves converting non-believers, too. In fact, in neither system is there any room for non-believers.

Maybe you now think I’m being harsh to Communists and fundamentalist Christians. Let’s point out that similar belief systems exist in many of the world’s major religions — and also in the unspoken religion of a lot of atheists, which is the belief in technology. Just look at Ray Kurzweil and his futurist predictions! We talked about these a lot in our episodes on the singularity. A quick recap: it’s basically the idea that at some point, artificial intelligence outstrips human intelligence, and then society is radically altered in some way as exponential development overtakes what we can currently imagine. In Ray Kurzweil’s vision, the singularity is the establishment of paradise; everyone is rendered immortal by biotechnology that can cure all of our ills; our brains can be uploaded to the cloud and we can experience deeper and more wonderful interactions and unions with each other; inequality and suffering wash away under the new wave of these technologies. There is disruption, especially at first, as the singularity arrives and people panic about what to do. There is even a sort of judgement day.

Quoting from the Michael C. Carlos Museum:

“The Egyptians viewed the heart as the seat of intellect and emotion; as such, it played a central role in the rebirth of an individual in the afterlife. The heart of the individual was weighed against the feather representing the goddess of truth, Ma’at, in a judgment process overseen by Osiris, the lord of the underworld. The judgment was a frequent subject for funerary art, especially on papyri and coffins. Central to the scene was a large balance, with the heart in one pan and either a feather or a tiny figure of Ma’at, in the other pan. In most scenes, a demon called Ammit, “the Devourer,” crouches below the balance, anxiously awaiting the outcome. Should the heart of the deceased prove to be heavy with wrongdoing, it would be eaten by the demon, and the hope of an afterlife vanished. Oddly enough, the Egyptians never seem to have depicted the negative outcome of the weighing, only the joyful individual being received by Osiris and presented with offerings.”

Perhaps in Kurzweil’s singularity, something similar goes on. As our technology improves, a final reckoning approaches; our hearts, as humans, will be weighed against a feather. If they prove too heavy with wrongdoing — with misguided stupidity, with arrogance and hubris, with evil — then we will fail the test, and we will destroy ourselves. But if we pass, and emerge from the singularity and all of its threats and promises unscathed… then we will have heaven.

And, of course, other people view the singularity as nothing less than the end of the world: Terminator scenarios where AI replaces us out of malice, or through some accident or failure in how we program it… the apocalyptic version of the millennium. And, like the other belief systems, there’s no room for non-believers; all of society is going to be radically altered, whether you want it to be or not, whether it benefits you or leaves you behind. In some cases it almost seems like Kurzweil is talking about a technological rapture.

It almost seems like every apocalyptic threat, or any threat that might seem apocalyptic, is responded to in this way. Nuclear weapons provided a similar dichotomy when they first arrived: either this would prove the final straw, and we’d destroy ourselves, or nuclear energy could be harnessed to build a better world. People talked at the dawn of the nuclear age about electricity that would be “too cheap to meter.” (Yeah, what exactly did happen to that?)

When we see the same response over and over again to different circumstances, cropping up in different areas, whether it’s science, religion, or politics, we need to accept something. This is really a part of human psychology. We *want* to believe in this way; and so when the idea of artificial intelligence outstripping human intelligence emerges, we can stick it onto the millennial bandwagon.

As humans, we have no intrinsic love of facts. We don’t love information. We might want to believe that we are rational beings, but it’s not always true. We are creatures of narrative. We love narrative. Physicists observe the world and weave their observations into narrative theories — stories about little billiard balls whizzing around and hitting each other, or space and time that bend and curve and expand. Historians try to make sense of the hodge-podge of events and spin them into narratives that are consistent and pleasing to the eye, even if they’re not always happy ones. We narrate our lives as we go, explaining where we are now, using the tropes we pick up from fiction, from each other, and from the world around us to guide us as we construct the narratives. Things that don’t fit into the stories we tell ourselves make us uncomfortable, or get discarded; and if we can’t do that, then we have to change the story to fit them in. Maybe it’s another trial on the way for the hero, another obstacle they have to overcome, but all good stories have that, right? We create stories that explain things. Stories that allow us to believe that we understand things. Stories that are useful. Stories that make sense of the past, justify the present, and prepare us for the future.

And as stories go, the millennial narrative is a brilliant and compelling one. It can lead you towards social change, as in the case of the Communists, or the Buddhist uprisings in China. It can justify your present-day suffering, if you’re in the tribulation. It gives you hope that your life is important and has meaning. It gives you a sense that things are evolving in a specific direction, according to rules — some rules, ANY RULES — and not just randomly sprawling outwards in a chaotic way. It promises that the righteous will be saved and the wrongdoers will be punished, even if there is suffering along the way. And, ultimately, a lot of the time, the millennial narrative promises paradise.

And it’s a lot more exciting than a vaguer approach that’s somewhat closer to the truth, which is something along the lines of: there are many ways that things can unfold; none of them is necessarily predetermined by unstoppable forces beyond our understanding; there are, at best, probabilities that each will happen; and lots of the outcomes are less extreme than you might think. That’s not satisfying. That’s not how humans work. It’s so much easier to think of things as either signalling the end of the world or the dawn of a utopia — or possibly both at once. It’s a narrative that we can get behind; a compelling one; a good story: and maybe, a nice dream.

And it can help us. Maybe an appropriate example, or at least a punny one, is the Y2K bug. If people hadn’t been so worried about the millennial consequences of, um, the millennium, we might not have prepared for it as well as we did, and things could have been far worse. But it can harm us, too. A lot of people in climate science, for example, are concerned about a pivot in public perceptions: one that goes straight from denial to fatalism. As in: instead of believing “this isn’t happening, it’s a hoax”, people go straight to believing “there is nothing we can do to stop it, so we might as well not bother.” Fatalism is an appropriate response to a millennial narrative. Is it an appropriate response to the problems that we face? It might be an easy one.


The tendency to assess everything according to these quite extreme, lurid narratives makes real assessments of risk very difficult to carry out. The issue is that human judgement is prone to all kinds of cognitive biases. For example: how about the simple fact that an event that could threaten human extinction has never occurred in living memory? (Although people might argue that the Toba super-eruption 75,000 years ago, which may have caused a population bottleneck with only a few thousand surviving individuals, comes close.) This lack of apocalypses in our lived experience, while undoubtedly a good thing, makes it difficult for us to assess the risk; we’re not used to calculating risks for things that have never happened.

Then there’s hindsight bias. Take nuclear weapons; we now know that the outcome of the Cold War was not a thermonuclear exchange between the Russians and the US. (Pipe down, historians who argue that the Cold War is not yet over.) We can rationalise that fact after the fact, and put it into our narratives: whether it’s that, ultimately, the USSR was unstable, or that politicians are unwilling to risk their own destruction, or whatever — it almost seems like, well, of course mutually assured destruction works, and nuclear war was unlikely. But it didn’t seem like that at the time. What if it turns out that, somehow, the probability of a catastrophic nuclear war during the Cuban Missile Crisis was 50%? That’s not too far away from estimates made by people involved at the time. Let’s make it a coin toss for simplicity. We are happier assigning a probability to a coin toss; if it comes up heads, we’re equally happy to believe that the outcome could just as easily have been tails. But when the stakes are as high, and as narratively loaded, as the apocalypse… it doesn’t sit right to think that we’re in the half of all universes that made it. Even though that very well could be what has happened. Instead, in hindsight, we judge things to have been inevitable.

Bostrom’s book mentions this in a well-written essay by Yudkowsky, which refers to a study where people were told the same information about a historical war — and then given different ‘results’ as to who won. In almost all cases, the participants said, well, it was inevitable — obviously Team A was going to win. And I imagine that, in the burned-out husks of New York, London, and Moscow, the survivors would probably be telling each other that it was inevitable that a nuclear war would occur, sooner or later. That’s what we call hindsight bias. It undermines our ability to make probabilistic assessments of the world around us — to marshal reasonable responses to perceived risks. The nightmare scenario is always more lurid; so people are more concerned, and devote more mental energy, to fears about being murdered or dying in an accident than they do to dying of heart disease, even though the latter is far more likely.

Equally, there’s the availability bias: humans tend to assess as more likely the things they can most easily call to mind. You can see how this could lead us to both overestimate and underestimate potential threats — or even just misunderstand what the real threat is altogether.

Nassim Nicholas Taleb talks about Black Swans — rare events that nevertheless do occur. If we don’t prepare for this kind of event, then we are setting ourselves up for failure. And afterwards, hindsight bias means that we fail to learn what the black swan really means. Take, for example, the financial crisis. It broadly occurred because subprime mortgages were packaged and sold alongside other investments, which led to a liquidity crisis when people realised that a lot of what they were buying could be debts that would never be repaid. The hindsight-bias lesson is that subprime mortgages — where you offer mortgages to people who are less likely to be able to repay them — are a bad idea, inevitably leading to ruin. The real lesson is that, in a system as intricately integrated and complex as the financial services industry, problems that should be relatively small — a few people defaulting on their mortgages — can lead to a global economic downturn, with consequences that are even now unfolding. Similarly with the 9/11 attacks, which led to a massive increase in airline security and a much greater focus on Islamic extremist terrorism, and al-Qaeda in particular. But there were similar warnings beforehand about hundreds, if not thousands, of threats to national security. In hindsight, with the laser-like focus on AQ, we can question why they weren’t stopped — and we can massively overestimate the power of the group. The lesson, as Taleb eloquently argues, should be that black swans happen. Be prepared.

When you apply this to the world as a whole, you get ‘anthropic bias’. All we know for sure is that we exist, this one, intelligent [CITATION NEEDED] species. Our planet has managed to survive for long enough for us to develop. But we don’t know whether this is because the rate of events that destroy planets is really low, or whether we’re just the cosmic equivalent of flipping heads a hundred times in a row.

Another fallacy that distorts our ability to assess risks is the conjunction fallacy. Here’s my spin on the classic example. If I tell you that Richard writes poetry, grows his hair long, and smoked weed in college, rank the following statements from most to least likely:

1) Richard has a blog on Tumblr

2) Richard is an investment banker

3) Richard plays in a band on weekends, when he’s not busy investment banking.

People generally assess the last statement as more likely than the second. But it contains an extra detail, and by the laws of probability, the second statement must be at least as likely as the third — after all, not all investment bankers play in bands on the weekends! Adding plausible details to a scenario can make us assess it as more likely than we would do otherwise. Another example relates more directly to catastrophising. People were told they were going on a trip to Thailand, and separate groups were asked how much they’d be willing to pay for different kinds of terrorism insurance. The first group was told the insurance would cover the flight from the US to Thailand; the second group, both flights; and the third group, the whole trip. The group that was told about the specific flight offered to pay the most; then the group where both flights were mentioned; then the whole-trip group. Mentioning the flights — a specific scenario — caused people to value the insurance more highly EVEN THOUGH it covered less than “the whole trip.”
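
To put rough numbers on the Richard example, here’s a minimal sketch; the probabilities are invented purely for illustration, not estimates of anything:

```python
# Conjunction rule: P(A and B) can never exceed P(A), however vivid the added detail.
# The numbers below are made up for illustration only.

p_banker = 0.05                 # assumed chance Richard is an investment banker
p_band_given_banker = 0.10      # assumed chance a banker also plays in a weekend band

p_banker_and_band = p_banker * p_band_given_banker

print(f"P(banker)          = {p_banker:.3f}")           # 0.050
print(f"P(banker AND band) = {p_banker_and_band:.3f}")  # 0.005 -- necessarily smaller
```

However you set those numbers, adding the extra detail can only shrink the probability; our intuition runs the other way because the detail makes the story more vivid.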

And, as the essay points out, this means our policy-makers can be persuaded to invest our resources in less wise ways. Consider two alternative proposals: one suggests funding for an organisation that would distribute food in the case of an emergency; the second details a scheme that will counteract the threat of cyberwarfare from China. Which is more likely to get funded? Chances are, the second one — which addresses a specific threat that captures the minds of politicians and the public more easily — will get the funding. But the reality is that the first scheme might be more useful *because* it’s more vague; it can be used in more cases. If it turns out that a pandemic is going to disrupt society, then defence against Chinese cyberwar is useless. This has been pointed out before — resources in governments have been diverted away from general-purpose disaster response and towards addressing specific threats.

Let’s put this in a relevant context. In the last series of episodes, I’ve described in lurid detail ten possible scenarios for a cataclysmic event that could severely disrupt society as we know it. And I bet, during each episode you listened to — as the details for potential scenarios rolled in — you were assessing that specific threat as more and more likely. If I asked you to assign probabilities, maybe you’d give nanotech a 10%, nuclear war a 10%, pandemic or bioterror a 10%, Malthusian catastrophe a 5%, and so on… while listening to each episode. Why not pause the episode and try it now?

Okay, now add the probabilities up. When I did this, I got a solid 50% for the end of the world occurring at some point in the 21st century. But that sounds way too high to me for some generic apocalypse. If you’d asked me to estimate “any TEOTWAWKI scenario” I’d probably have just said 20% or so. This idea that specific risk scenarios make us assess something as more likely… well, it’s been an insurance salesman’s trick for decades, right? But it can distort our responses to real risks.
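
As a side note on the arithmetic: simply summing per-scenario guesses also overcounts slightly, because more than one disaster could in principle strike in the same century. Here’s a minimal sketch, using illustrative guesses like the ones above and assuming, unrealistically, that the scenarios are independent:

```python
import math

# Illustrative per-scenario guesses for the 21st century (assumptions, not estimates)
p_scenarios = [0.10, 0.10, 0.10, 0.05, 0.05, 0.05, 0.02, 0.02, 0.005, 0.005]

naive_sum = sum(p_scenarios)                                 # 0.50 -- simple addition
p_at_least_one = 1 - math.prod(1 - p for p in p_scenarios)   # ~0.41 if independent

print(f"naive sum:             {naive_sum:.2f}")
print(f"P(at least one event): {p_at_least_one:.2f}")
```

Either way of combining them, of course, inherits whatever bias inflated the individual guesses in the first place, which is the real point here.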

Sometimes we get it wrong. Take particle accelerators, for example. Remember when the LHC was launched, and everyone was concerned that it would bring about the end of the world? A very specific scenario was posed: that it would produce “mini black holes” that could tear apart space and time. If the scenario had instead been “a collapse of the false vacuum” or “the production of strangelets” or “the density of the Omega field would diverge”, people probably wouldn’t have listened, because these are less well-known aspects of physics. Especially the third one, which I just made up.

Martin Rees talks about the risks from particle collider experiments in his book on TEOTWAWKI:

“I discussed these issues with a Dutch colleague, Piet Hut, who was also visiting Princeton and subsequently became a professor there. (The academic style of this institute, where Freeman Dyson has long been a professor, encourages “out of the box” thinking and speculations.) Hut and I realised that one way of checking whether an experiment is safe would be to see whether nature has already done it for us. It turned out that collisions similar to those being planned by the 1983 experimenters were a common occurrence in the universe. The entire cosmos is pervaded by particles known as cosmic rays that hurtle through space at almost the speed of light; these particles routinely crash into other atomic nuclei in space, with even greater violence than could be achieved in any currently feasible experiment. Hut and I concluded that empty space cannot be so fragile that it can be ripped apart by anything that physicists could do in their accelerator experiments. If it were, then the universe would not have lasted long enough for us to be here at all. However, if these accelerators became a hundred times more powerful — something that financial constraints still preclude, but which may be affordable if clever new designs are developed — then these concerns would revive, unless in the meantime our understanding has advanced enough to allow us to make firmer and more reassuring predictions from theory alone.”

So you can see what Rees is saying: there are collisions from cosmic rays in the upper atmosphere hundreds of times more powerful than the ones that occur in the LHC — so if they could cause the apocalypse, it would already have happened. Obviously there’s still the subtle possibility that maybe they *can*, and the probability is just tiny — but even if that’s true, we’re not significantly increasing it with our experiments beyond the risk that may already exist; so we should all be happy to let them run for science.

In hindsight, then, this was a tiny risk. But the vivid, lurid, and narratively compelling scenario of a Promethean experiment that caused the end of the world probably made people overassess it. Ditto the scientists at the first nuclear tests, who genuinely thought that the atmosphere might catch fire as a result — one of them, possibly joking, put the probability at 1/3. Was it lurid sensationalism that made them overassess the risk then? Or is it hindsight bias that makes me underassess it now?

Ultimately, there are so many human biases that the whole thing can seem a little fruitless. But there are some decent lessons to take away from the whole sordid affair, before we decide it’s necessary to throw our hands up and hope that some computer arises that can correctly calculate risks. First up: general responses and readiness can be better than specific salves for specific problems, and they circumvent the fact that we’re rubbish at assessing probabilities. Secondly, we need to be wary of anyone making overly confident predictions about anything. Thirdly, remember that more lurid scenarios depend on more things happening to make them come true; they are not more likely than vaguer ones. Fourthly, what has happened, the state of the world today, is by no means inevitable. Just because risks have been avoided in the past doesn’t mean they weren’t a genuine threat; we may just have got lucky. And no one ever gives any credit to a preventative measure that was successful, with the possible exception of condoms.

But to finish, I want to return to this idea of narrative — and our millennial response to catastrophe. We all love stories and narratives. They’re often our way of coping with ideas that are too big, too confusing, or too unknown to comprehend. But these stories can be very dangerous when they overstate the certainty of the outcome. Accepting that we don’t necessarily know what will happen can mean embracing our power to change and prevent it.

Yudkowsky talks about a sudden flip that happens in people’s minds when existential risks are discussed. People who wouldn’t dream of harming a child can suddenly say, “Well, maybe the human race deserves to die.” It’s a form of scope neglect — once things are moved into the general realm of tragedy, beyond stakes we can just about try to understand, like an individual human life… As Yudkowsky puts it: “Human emotions take place within a single brain. We can empathise with one person, or a small group of people, far better than we can weigh a larger tragedy. The human brain cannot release enough neurotransmitter to feel an emotion 1,000 times as strong as the grief of one funeral. A prospective risk going from 10,000,000 to 100,000,000 deaths does not multiply by ten our determination to stop it; it adds one more zero on paper for our eyes to glaze over.” We’re not really thinking about what it would mean; we’re not really allocating resources accordingly; we’re not really being prudent in what we do.

If, when faced with a threat, we flip to apocalyptic thinking, or millennial thinking, because these narratives are the ones that work for us, because they’re the most satisfying ones, and because our cognitive biases mean our assessments of risk are so junky… we might miss our chance to rationally and reasonably address the problems. And the stakes of a truly existential threat — one that would wipe out intelligent life on Earth — extend so much further beyond the people we know, and the world that exists today. It is an unimaginably vast potential future, with the possibility of billions of intelligent lives spreading out across the universe. We must accept that, try as we might, we cannot understand what’s at stake, or what the odds are, but that — like it or not — we’re playing the game. Better be careful, then.

Thanks for listening etc.