How Can We Save The World?
Existential risks, and the philosophy of the end

The “How To Save the World” episodes were first published in June 2018. Part I is here and Part II is here.

So, now that I’ve thoroughly frightened us all with the terrifying visions of the apocalypse, I guess I should answer the question: what can we do to save ourselves? What should we do to save ourselves?

Nick Bostrom, the philosopher who directs the Future of Humanity Institute, is concerned that we’re not concerned enough about the existential threats to humanity. He explains himself in a paper you can download for free from the Institute’s website, where he points out that there are fewer academic papers published about existential risk than there are about dung beetles; and it’s true that there has perhaps not been enough academic focus so far on how to avoid the extinction of the whole human race. His arguments go something like this:

The difference between an event that kills 99% of humans and 100% of humans is a far bigger difference than the difference between an event that kills nobody, and an event that kills 99% of humans. Even if only 1% of humanity survives, there’s still a chance that human civilization can be resurrected.

So part of the logic here is that without humanity, there are no sentient beings to make value judgements about things. As in, we as humans determine which things are “good” and which are worthwhile and which should be preserved; and so you’re left with the solipsistic conclusion that all the “value” in the Universe ultimately flows from us — at least, assuming there are no other sentient beings.

Just as money is only valuable because everyone believes that it’s valuable, so things only have beauty because we perceive them to be beautiful. I guess, deep down, I probably do buy into “human exceptionalism” enough to think that this is probably true. But on the surface, I feel sad about it; it seems to imply that things have no meaning without humans to give them meaning, and while it’s hard not to come to that conclusion, I think for a modest species it’s an uncomfortable one.

The real point of Nick Bostrom’s philosophical musings is that we should be far more concerned about the far future than we actually are; and, of course, this makes sense. People are making decisions today, with an eye to profit that can be achieved in our own lifetimes, that might mean that in a few centuries, London and New York are underwater. But, on a deeper level than that: if the human experiment works, and we can persist as a species over the long term, then there will be billions upon billions of humans who live after us. The world they live in will depend on our decisions today — whether they exist at all depends on our decisions today. If deploying some risky bio- or nanotechnology now could make us a little extra money, but could also destroy all of that future potential, a philosopher like Bostrom would argue that our altruism should extend to humans in the far future, and that we should value that potential far more than we do.

Even if you don’t go Full Bostrom — assigning an infinite value to human civilization, and thus arguing that a handful of humans surviving is infinitely better than none surviving at all, and should be ensured at any cost — you can probably accept that human civilization and the species have some value. Maybe a tiny bit of value. If we change things and get them right.

The first and most noticeable pattern you’ll probably see in my top ten is that the worst and deadliest catastrophes are all man-made. There are some natural phenomena out there that do have the potential to wipe us all out, but many of the deadlier threats are in our own hands — at least in part. The natural catastrophes, albeit devastating, are often localised in scope to specific areas. Even a naturally evolving super-virus or pandemic illness, although it has the potential to badly disrupt or even end society as we know it, is unlikely to be as effective at wiping all of us out as humans can be. The threats from outer space, broadly, are either highly unlikely, like supernovae, or possibly preventable with new technology, in the case of asteroid strikes. Far bigger threats to civilization are posed by nuclear weapons, biotechnology or nanotechnology running rampant, the superintelligent AI we might create, and our own rapacious desires causing us to render the planet uninhabitable, whether by using up all of its natural resources or by triggering some other kind of ecological catastrophe, like runaway global warming.

Depending on your level of misanthropy, the idea that our future and the potential for civilization is in our hands might fill you with optimism — after all, aren’t we one of the first species on the planet that has established such incredible control, or maybe dominance is a better word, over its environment and surroundings?

We have less to fear from the blind hand of tragic fate, or from imagined gods that control the weather and the famines, than any previous civilization. What other species can say that the main threat to its survival is self-inflicted? For most species, the main threat to their survival is us.

Yet it’s a double-edged sword. The increasing complexity of our society, which allows us to head off these threats and control our environment — and support seven billion people — has led to this incredible, interconnected system. But the more moving parts you have, the more potential there is for something to go wrong. And in complex systems, chaos theory can begin to dominate — in the sense that seemingly small actions in one part of the system have the power to cascade in unforeseen ways, when all of the various stars align… and the more components there are in the system, the more easily this can happen. The dynamics can get really, really wild, and really, really difficult to predict. Let me give you an example of what I mean, via Nick Bostrom:

“It could turn out, for example, that attaining certain technological capabilities before attaining sufficient insight and coordination invariably spells doom for a civilisation. One can readily imagine a class of existential catastrophe scenarios in which some technology is discovered that puts immense destructive power into the hands of a large number of individuals. If there is no effective defense against this destructive power, and no way to prevent individuals from having access to it, then civilisation cannot last, since in a sufficiently large population there are bound to be some individuals who will use any destructive power available to them. The discovery of the atomic bomb could have turned out to be like this, except for the fortunate fact that the construction of nuclear weapons requires a special ingredient — weapons-grade fissile material — that is rare and expensive to manufacture. Even so, if we continually sample from the urn of possible technological discoveries before implementing effective means of global coordination, surveillance, and/or restriction of potentially hazardous information, then we risk eventually drawing a black ball: an easy-to-make intervention that causes extremely widespread harm and against which effective defense is infeasible.”

So there’s this idea that, if you get smart before you get wise — attaining certain technological abilities before you’ve ironed out all of the kinks in your civilization — you destroy yourselves. This is one of the proposed solutions to the Fermi paradox, the puzzle of why we don’t see a universe apparently teeming with alien life — and the one that Fermi himself was most concerned about. There are, of course, others.

Or, to summarise all of this in a more succinct way: “Can the global village deal with all of its global village idiots?”

This is exactly the concern I’ve voiced a few times in this TEOTWAWKI series: the idea that the development of our technology is outstripping our morality, and our progress as a society. It seems obvious to me that there are really only a few ways we can go as a civilization, in the long run: either we destroy ourselves, wholly or partially, and future generations curse us for our stupidity; or we transcend what we are today, manage not to destroy ourselves, and future generations pity us for our stupidity.

Just think about the number of things people used to believe that seem idiotic, or hopelessly misguided to us today. The idea, for example, that people used to routinely beat their children and that this was considered good parenting is something that a lot of people find deeply depressing today. Just as we don’t know which aspects of our culture will survive into the future and be remembered, so we can’t be sure which aspects will be condemned as hopelessly barbaric.

But that is for future historians — and I pity them, because they’ll have to scroll through a hell of a lot of Twitter to get to the bottom of what went on. The question for us is simple:

HOW CAN WE REALISTICALLY AVERT CATASTROPHE?

Or at the very least, mitigate catastrophe?

And there’s an equally interesting question: what should we do — and how much should we spend doing it?

Nick Bostrom points out that our institutions aren’t fantastic at dealing with some of these existential risks. There are whole professions, like being an actuary, that are dedicated to calculating risk — working out how likely it is that bad things will happen to people, as in the case of life insurance. But our governments aren’t very good at deciding what counts as an appropriate level of spending on prevention. Before you even get into whether you should weight lives in the future more heavily — and assuming you manage to overcome all of your biases and actually quantify the risk — which costs are worth bearing, and for which risks?

It’s almost like a reverse lottery problem: plenty of people are happy to spend £1 for a one in a million chance of winning a large sum of money, but how many people are happy to spend £100 to prevent a one in a million chance of disaster?
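
To make that asymmetry concrete, here’s a toy expected-value comparison in Python; the prize and loss figures are illustrative assumptions of mine, not data from any study:

```python
# Toy expected-value comparison for the "reverse lottery" asymmetry.
# The prize and loss figures below are illustrative assumptions, not data.
P = 1e-6  # a one-in-a-million probability

# Lottery: pay £1 for a one-in-a-million shot at a £1,000,000 prize (assumed)
ticket_cost = 1
prize = 1_000_000
ev_lottery = P * prize - ticket_cost

# Insurance: pay £100 to remove a one-in-a-million chance of losing £100,000,000 (assumed)
premium = 100
loss = 100_000_000
ev_insurance = P * loss - premium

print(f"Lottery ticket, net expected value:    £{ev_lottery:.2f}")
print(f"Insurance premium, net expected value: £{ev_insurance:.2f}")
# Both come out at roughly £0 -- expectation-neutral -- yet the ticket sells
# easily and the premium is a hard sell. The asymmetry is psychological.
```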

You might think that it’s obvious we should do whatever’s in our power to prevent a threat that could cause a massive loss of life. After all, even if you insist on measuring everything in economic terms, the kind of event that would kill tens of millions of people would cost the economy trillions of dollars. So, surely it’d be worth spending a billion as an insurance policy against this kind of event?

It depends on how likely you think the event is to actually happen. I mean, if you think about it, we are all making these decisions, all the time. I am a lowly student: I cannot afford to build a thermonuclear bunker under my house. Despite spending weeks researching the apocalypse, I’ve taken no steps to prevent it from affecting me. I don’t even have one of those cheap face-masks that supposedly stops you from contracting diseases. But if I were a billionaire, yeah, I probably would use a little bit of that money to build a bunker somewhere. (And, if you’re nice to me, I’ll let you come along for the ride. Make sure you bring some board games or something.)

One way of reading how governments assess these risks is to work out the simple ratio of what they spend on prevention versus the damage that would be done if prevention failed. If you spend a million pounds preventing an event that could do a billion pounds’ worth of damage, you’re implicitly pegging the probability of that event at around 1/1,000. But obviously, this doesn’t work for everything. Some phenomena can never be prevented, so we wisely don’t invest in preventing them — after all, if we are unlucky enough to be caught by a gamma-ray burst, it’s curtains regardless of what we do.
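
As a minimal sketch of that implied-probability reading — using the same illustrative figures as above:

```python
# Minimal sketch: read a prevention budget as an implied break-even probability.
def implied_break_even_probability(prevention_cost: float, damage: float) -> float:
    """Probability at which expected damage equals the prevention spend."""
    return prevention_cost / damage

# £1 million spent to prevent £1 billion of damage implies roughly 1-in-1,000 odds
p = implied_break_even_probability(1e6, 1e9)
print(f"Implied break-even probability: 1 in {1 / p:,.0f}")  # 1 in 1,000
```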

And, of course, you run into the cognitive bias we talked about, whereby we’re more prepared to pay to prevent more specific disasters. This has been shown in study after study: the more specific you are about the calamity, the more people will pay in insurance to prevent it — even if the insurance guards against fewer risks, and is worth less. So for our purposes, maybe people are willing to spend an awful lot more than they should on an asteroid shield — because they can imagine this risk, even if the probability is vanishingly small — and less on research into AI safety, because those scenarios are less well understood. In this sense, being doom-and-gloom and painting more vivid scenarios of what could happen in the future — making the risk specific, in other words — might actually help us to spend more to prevent it. Alternatively, we might be better off not assuming too much about the nature of the threat, and just being generally more prepared for catastrophe or systemic failure. Although I have to say that this effect — the more detail you can imagine for a scenario, the more likely it seems to happen — has not been great for my sanity while researching these episodes!

Let’s look at some of those specific risks. The United States and Russia still have around 7,000 nuclear warheads each: arguably they could get by with a few hundred, as China does, and with fewer warheads deployed, many of the risks are reduced accordingly. Smaller stockpiles would also cost less money to maintain. As I’m sure Stephen Schwartz, who we interviewed, would agree: we can decrease the risk of catastrophe for negative money!

Would tighter restrictions on biotech, research into AI safety, or governments training and hiring cybersecurity experts really cost that much compared to the potential costs?

There are probably a few hundred people worldwide working on AI safety, and the whole thing is probably funded to the tune of millions of dollars — not billions. It’s hard to argue that, when a single movie can take $2bn at the Box Office, we shouldn’t be spending at least that amount on a risk that could destroy the whole human race.

In the UK, the government announced an extra £1.9bn in spending on cybersecurity: a good start. Cybercrime already costs our economy £34bn a year, though, and a lot of this is really just the cyber equivalent of theft and arson. Compare that to an actual all-out cyberwar, or to what could be achieved by a fleet of highly motivated and destructive hackers, and you can again argue that it isn’t enough.

Of course, with any kind of risk-prevention strategy, if it works well, people will decide that it was unnecessary. Attribution is very difficult: have there been fewer pandemics because of UN and government spending on the issue, or are the two unrelated?

These kinds of questions are always going to dog people who try to deal with existential risk. And, as in all economic questions, you have to weigh up the opportunity costs. What if tighter restrictions on biotech prevent life-saving treatments from being discovered? I’ve already argued that, to avoid the Malthusian catastrophe (you know, everyone running out of food) catching up with us, we’re going to need GM crops — so we can’t restrict biotech too much.

What if fears about artificial intelligence, or nanotechnology, cause us to regulate too harshly and squash economic development? This is, of course, the same argument that’s applied to lots of green policies by right-wing people: and it’s true that in developing these policies, we have to strike a balance between the positive effects and the negative ones. Environmental issues pose a particular problem when you analyse them, as we relentlessly seem to do for everything, through this economic lens. There’s an active debate raging about whether you can put a price on nature, and on ecosystems. Some people think that doing so is sacrilege. Take George Monbiot, who says:

“Natural wealth and human-made capital are neither comparable nor interchangeable. If the soil is washed off the land, we cannot grow crops on a bed of derivatives. Price represents an expectation of payment, in accordance with market rates. In pricing a river, a landscape or an ecosystem, either you are lining it up for sale, in which case the exercise is sinister, or you are not, in which case it is meaningless.”

Others argue that humans need some metrics to compare the ‘value’ to us of, say, cutting down a forest and turning it into paper, vs. the ‘value’ of keeping it standing. They would say you have to work within the paradigms of the society that exist at the moment — and that, if you don’t put a value on nature, the free market which currently dominates in most things will value it at zero.

A similar problem arises when it comes to climate change. What is it worth to make things more liveable and more comfortable for people in 2100? Given that the greenhouse gases we emit and the fossil fuels we burn today will make the environment less stable, and resources scarcer, for future generations — what is avoiding that worth?

Use of fossil fuels could allow economies to grow more quickly. Economic growth, while it’s not perfectly linked to things like healthcare, does reduce death rates due to disease, infant mortality rates — all kinds of things. Some people would argue that the best course of action is to rely on future technologies that will allow us to adapt to climate change, or even reverse it. Such a course of action is obviously risky; another risk to evaluate.

Here again it depends on how much you buy into Bostrom’s argument about weighting future populations:

“If you have that moral point of view that future generations matter in proportion to their population numbers, then you get this very stark implication that existential risk mitigation has a much higher utility than pretty much anything else that you could do. There are so many people that could come into existence in the future if humanity survives this critical period of time — we might live for billions of years, our descendants might colonize billions of solar systems, and there could be billions and billions times more people than exist currently. Therefore, even a very small reduction in the probability of realizing this enormous good will tend to outweigh even immense benefits like eliminating poverty or curing malaria, which would be tremendous under ordinary standards.”
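
The arithmetic behind that quote looks something like the sketch below — the numbers are placeholders of mine, and if anything sit at the conservative end of Bostrom-style estimates:

```python
# Placeholder figures only: Bostrom-style expected-value arithmetic.
future_lives = 1e16      # assumed potential future people (conservative by Bostrom's standards)
risk_reduction = 1e-6    # shave one-millionth off the probability of extinction

expected_lives_saved = future_lives * risk_reduction
print(f"Expected future lives preserved: {expected_lives_saved:,.0f}")
# ~10,000,000,000: under these assumptions, a one-in-a-million reduction in
# extinction risk is "worth" ten billion future lives in expectation -- which
# is why this moral framework ranks risk mitigation above almost anything else.
```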

But if you take this to the moral extreme, there’s no limit to what you might permit. If you think that, say, AI is the biggest threat, and we’re not ready to programme it well or responsibly yet — that we’re not wise enough to use these new technological powers — why not argue that everyone currently working on AI or similar technologies should be stopped? This is an argument for addressing climate change, too: the effects will be with us for thousands of years, and therefore billions of future people will be impacted by the decisions we each make today. How well does this argument fly? How much do you believe it? The truth is that humans don’t all act like perfect utilitarians, and what we find to be “good” is often very different to what we’d like to think it is. Which all complicates decision-making.

You see these kinds of trade-offs all the time. For example: air pollution, in the form of aerosols, currently cools the Earth to the tune of between 0.3 and 1.1 degrees Celsius. It also results in the premature deaths of millions of people. If all you cared about was global mean temperature, you’d probably make fewer efforts to get rid of that pollution — but, of course, that isn’t all we care about.

And what about the latest great mass extinction: the one being caused by human beings? What is the market value of a unique species? Is it set higher for cute animals like pandas, and lower for abundant, ugly species like beetles? It seems, at present — to our shame — that we don’t rate the existential risks to other species particularly highly. This is why relentless economics always needs to be tempered by some morality; because, you know, maximising the stock market does not necessarily maximise all of the things we value in the world.

And Bostrom talks about other kinds of apocalypse, as well; not necessarily a cataclysm that destroys humanity or sets civilization back by a hundred years, but instead a “flawed realization” — the human race reaches technological maturity, but in a way that’s far from ideal. So a classic example of this, I guess, would be an Earth that is environmentally destroyed, or governed by a repressive totalitarian state.

It may well be that there is a big gap between surviving the 21st century and developing interstellar travel; in which case, destroying the world we have is a serious issue. I’m reminded of so many of the sci-fi dystopias I read growing up: the Earth has been ecologically shattered, the poor people remaining there are clinging to a vastly overpopulated wasteland, and for the colonists struggling to survive on the inhospitable planets of our solar system, life is hardly any better. In Frederik Pohl’s Gateway — a brilliant book that I discussed on the Hugo’s There Podcast — everyone remaining on Earth lives in hope of winning the lottery, because that’s enough to get them a one-way ticket off Earth, out to where alien ships left behind by a super-advanced race called the Heechee sit waiting. The Heechee ships can be flown, but only to a set of predestined coordinates; it’s a vast, space-bound game of Russian roulette: the explorers may find something that makes them unimaginably wealthy, or they might just end up being killed, or running out of food on a journey to nowhere. Given the poor state that the Earth is in, for most of them, this turns out to be a decent option.

How can we avoid this? And how can the global village deal with its global village idiots? One of the ideas that has been suggested is a global government. Obviously, the formation of the UN — the closest thing we have to a world authority at the moment — has perhaps laid some of the framework for this: a big part of the impetus for forming it was to deal with the threat from nuclear weapons and ensure non-proliferation. (Of course, the world had just seen, in two disastrous wars, the dangers of having individual nation states floating around.) In fact, immediately after the Second World War, there was widespread public support for the idea of a global government that would take control of nuclear weapons, and of parts of the militaries of the major nations, to prevent future conflicts from arising. We’ve talked about this in the nuclear weapons episodes, and will do so again when I get to fusion.

The advantage of some sort of global government would be that we could address, in a unified way, the issues that affect the whole planet: we could come up with a coherent scheme for dealing with climate change, or asteroid strikes, or diseases that could become global pandemics; we could consistently enforce laws about biotechnology and nanotechnology across all jurisdictions, rather than having the hodge-podge of rules that we have at the moment. For example, having multiple nation states means that some nations are incentivised to have more lax rules about what can and can’t be done in biotechnology. You can make money by being the most lax jurisdiction out there.

For example, there was a woman — BioViva CEO Liz Parrish — who wanted to undergo an experimental procedure to have her telomeres lengthened. Telomeres are stretches of repetitive DNA that cap the ends of each chromosome — and, as we age, they get shorter. It’s unclear at the moment whether you can live longer by lengthening them again — correlation, after all, is not always causation — but nevertheless, that’s what she wanted to do. To avoid the FDA in the US, which had not approved the experiment, she went to Colombia.

Of course, it doesn’t necessarily have to be a question of different laws in different countries giving some of them an economic incentive for bio-tourism, or whatever — even just differences in enforcement can be a problem. If, in the future, it becomes possible for many, many actors to illegally create or programme bioweapons or nanorobots, then even a single anarchic state where the laws can’t be enforced properly is a threat to the entire world.

And, if you believe that a global government would also mean an end to war (as many of the utopians who first came up with the idea did), then many of the issues that arise from conflict and weaponry are severely reduced. It’s already necessary for us to have a number of agencies whose authority and influence spread all over the world, although sometimes they’re a bit of a front for the powerful nation-states of the day rather than genuinely representing the world’s population via democratic means.

If you take the really long view, at least to me, it seems that nation-states will eventually come to look a little anachronistic. They’re founded on things like a common culture, language, and shared heritage; but these are things that can change over time. If technology improves to the point where we genuinely have babel-fishes — live, accurate, computer-based translation that works in real time and allows people to communicate across different languages with ease — a massive barrier will come down. Already, for better or worse, a lot of the world’s culture is becoming homogenised, at least among the wealthy and well-off. The Internet has provided an instant bridge between nations. The majority of my listeners aren’t from the UK, which wouldn’t be the case if I’d adopted the old style of apocalyptic preaching and stood on a cardboard box in the street rather than making a podcast.

And until relatively recently, there was a continuous trend towards greater political union and multi-national cooperation; we’re now in something of a backlash, but who knows how long the political tide will recede for?

But people are rightly suspicious of a global government, which is really why we don’t have one yet. For a start, a lot of people have a lot of self-interested reasons to be suspicious: if everyone is going to be genuinely, equally enfranchised, with an equal say in the direction of society and the world in general, then for a lot of people who currently hold outsized power, that’s going to feel like a pretty severe disenfranchisement.

If power is a zero-sum game, then the idea that the votes of (say) a wealthy New York elite type, and a farmer in Bangladesh, will be worth the same… it’s going to seem to the New York elite like they’re losing out. If you have a global government, what do you do about the fact that resources that people value aren’t equally allocated?

In the modern era, some countries are disproportionately wealthy because they have massive reserves of oil. Perhaps in the future, those countries close to the equator, that benefit from a lot of sunshine for solar power, will be similarly more valuable than the cold and bleak countries of Northern Europe; or maybe, as seems likely, things will swing towards a balance where the countries that have the largest populations, or produce the most food, will become the economic powerhouses. People are going to feel like they have the same rights to benefit from their country’s resources as they did in the past. Inequality between nations is likely to continue for a very long time; at the very least, for as long as it can be sustained by the people who benefit from it. While this is the case, it’s perhaps inevitable that a global government can’t truly be a global government.

And, to give anti-globalists some credit, there is a genuine and deep concern that a global government could descend into a totalitarian state — especially if people try to establish it before the world’s population is really ready for the idea. These circumstances could be completely nightmarish. Nick Bostrom talks about a totalitarian state as an example of one of his “flawed realisations” — and as another global catastrophic risk. After all, by his definition, a catastrophic risk involves the deaths of tens of millions of people: and that occurred under the totalitarian governments of Mao, Stalin, and Hitler in the 20th century. A totalitarian global government could kill so many more. The idea of a global government that coordinates the response to apocalyptic threats and keeps the peace — or global cooperation that does the same thing — is grand, maybe even necessary. But the devil, as always, is in the details.

Let’s be provocative for a minute. As I write this, there’s an awful lot of talk about the US Embassy moving to Jerusalem. As I discussed in the episodes with Phil Torres, for some people — by no means all, but for some — a key reason to support Israel is Biblical prophecy. A Pew Research poll, for example, suggested that 51% of evangelical Christians listed it as a reason, and 12% thought it was the most important reason; similarly, 80% of evangelicals said that the existence of Israel showed that Christ’s return was getting closer. So — in a tangible way, even if it’s hard to quantify — people’s beliefs about the end of the world are currently influencing the policy of governments. A good chunk of those people probably expects the Rapture to arrive in their lifetimes.

Now — before I go any further, I should point out that the example I’m about to give sits outside mainstream theology, which generally counsels against trying to predict or hasten the end — and that many religions have apocalyptic sects. But it is clear that some number of people, even if it’s very small, view the end of the world as a good thing — something to be encouraged, even hastened, because it will lead to the Second Coming and paradise and redemption and the fulfilment of prophecy and all of these things.

Imagine you’re in Bostrom’s world, where new technologies allow destructive power to be distributed in the same way that information has become distributed via the internet: and now, these people have access to the means to inflict untold horror on civilization. How do you react to that?

Keeping track of every group of people who might possibly seek to cause untold violence, carnage and destruction is one thing — that’s what we expect governments to do now. But you could just as easily imagine a global government saying — the risk is too great. We cannot allow people to believe that the apocalypse will be a good thing, because anyone who believes that might just engineer the nanobots to trigger it.

Isn’t it almost the definition of religion that you have a different set of beliefs about what’s valuable? Just as Bostrom and others might argue that future generations should be valued above all else, many people believe that there is an eternity beyond this one that far outweighs everything in this world. This belief can motivate people to be kind and self-sacrificing in their lives, because they think there’s a greater reward waiting. It can also motivate them to blow themselves up. Could a world where people are far more empowered to wreak havoc allow belief-systems in which some things matter more than — say — human life to continue to exist? But *stopping people from holding beliefs* is the worst kind of totalitarianism, and a terrible risk in itself.

We’ve already talked about how these millenarian beliefs are common. Not just religions: Marxism, and some kinds of radical environmentalist thought, have similar overtones. In fact, it’s a theme common to lots of attractive belief systems about humanity — one that we return to again and again. What do you do about these people — the people who might destroy the world?

Because, after all, doesn’t this very idea of utopia almost require certain points of view to vanish, a certain uniformity of belief about what “good” is? And, in this sense, we’re back to the problem of superintelligent AI — what to optimise, and what to programme the machines to do on our behalf, if we can even control them.

Don’t most people’s visions of utopia look more or less like a world where everyone thinks like they do — perhaps with a little variation thrown in to keep it interesting? How do you get everyone to agree on one?

Ensuring that liberal democracy is maintained in individual nation-states has proved difficult enough; the nightmare of maintaining it across the whole world, with so many different ideologies, issues, and concerns, is even worse. Would a global parliament be able to get anything done, or would they just bicker and fight until people started to think the whole thing was a bad mistake?

The problem with totalitarianism as a risk is that new, unprecedented technologies make new and perhaps more effective kinds of totalitarianism possible. Even the relatively unintelligent AI algorithms we have today allow for a surveillance state that Stalin could only have dreamed of. It could turn out to be frighteningly easy, in a few hundred years, to set up a totalitarian state that won’t collapse and will be very difficult to overthrow — one that’s semi- or fully automated. Humanity might survive under such circumstances, but it would be a dismal prospect for us; not the world that we’d want.

There are different ways to think about totalitarianism as a threat. In Global Catastrophic Risks, the Bostrom-edited tome, Bryan Caplan argues that a totalitarian regime can be a threat multiplier for other existential risks. If the world is run by one person, or a group of people, we’re subject to that person’s idiosyncrasies. Stalin didn’t believe that Hitler would invade — and so his rule was nearly ended by that failure to prepare. Add to this the fact that, in totalitarian regimes, people can be reluctant to challenge the view of the Great Leader, and the whole system becomes less prepared for other kinds of catastrophe:

“To call attention to looming disasters verges on dissent, and dissent is dangerously close to disloyalty… For the ruling party, this may be a fair trade — greater control vs less insulation from disasters. For global catastrophic risks, we must add the direct cost of totalitarianism to the indirect cost of exacerbating other risks.”

Will totalitarian regimes be able to last longer in the future? Part of the answer lies in how such regimes have collapsed in the past. Caplan points out that they often struggle with succession crises — Mao eventually gave way to Deng Xiaoping, who relaxed the totalitarianism somewhat, and a similar thing happened after Stalin. He also notes that totalitarian regimes are destabilised by non-totalitarian neighbours. It’s a bind: either nations completely isolate themselves, like North Korea — and hence lag economically and militarily behind the others — or they allow trade and communication, and risk the spread of dangerous ideas. For this reason, a global totalitarianism — which we’ve never seen — could be more stable. Alternatively, as in 1984, you could have a very small group of totalitarian states that keep power by demonising and skirmishing with each other, so that there is still no “free” alternative. And if the regime can’t be conquered from without, as Hitler’s Germany was, one major way dictatorships fall is gone.

In such a situation — a global totalitarianism — there are all kinds of consequences. Scientific and economic development is often motivated by competition with others — that’s why the US sent humans to the Moon, after all. In a global state, not only is there no-one to compete with, but you might argue that technological progress could actively challenge the ruling group. Caplan puts it thus: “The rule of thumb ‘Avoid All Change’ is easier to apply than the rule ‘Avoid all change that might in the long run make the regime less likely to stay in power.’”

Technology also opens the door to new kinds of totalitarianism — like the soft totalitarianism of Brave New World, where, instead of being executed and tortured, dissidents are simply given drugs to keep them docile and obedient. Uprisings, or power struggles within the elite class that forms in a totalitarian government, are often the cause of its downfall — look at the USSR again. Could a totalitarian state genetically engineer its humans, or its party members, to be unquestioningly loyal? Could it scan their brains to determine whether they harboured thoughts of dissent? Could the succession crisis be eliminated by making the supreme leader immortal in some way?


If you think that a global totalitarianism is possible, what does it mean for the human species? It exacerbates some risks while dampening others. If such a regime is run by humans, and they accept that a superintelligent AI could prove to be a threat to them, they might ban AI research — perhaps even in the guise of keeping us safe. They might ban nanotechnology or strictly control biotechnology — and, with a surveillance state that had no qualms about killing people, they might even be able to enforce those bans. This, arguably, reduces some of the existential risks we’ve talked about — at the cost of a dismal future for everyone. However, if any dissenters do get their hands on the technology, they might be more motivated to overthrow an evil dictatorship than a benign one. And if scientific development is stunted, maybe we never become an interstellar species — or that day is delayed until something else manages to wipe us out.

There is, then, a delicate balancing act that we must try to perform. We need to find some way of globally coordinating the response to catastrophic risks — especially if technology evolves in such a way that these risks can come from a greater number of actors. Yet we also need to avoid becoming so overbearing that technological progress that might otherwise provide huge benefits is curtailed — or so paranoid about the risks that we impinge on people’s freedom. The only trouble is, I imagine, that everyone will disagree about where those lines should be drawn.

And even a ‘good’, democratic global government, if it has term limits, can suffer from the same problems that our governments do today. If addressing a risk involves a big expenditure now, but won’t improve things measurably for ten or twenty years, politicians of all stripes are less inclined to vote for that over something that might have a tangible impact before the politician’s next election cycle comes up. And, on a broader scale, the finite nature of human lives has an impact on these things.

I remember talking about all of these apocalyptic scenarios with my brother, and mentioning — for example — that limitless economic growth can’t continue ‘forever’, because eventually our energy consumption alone would cause the seas to boil. This doesn’t necessarily mean that society is doomed, but it should be clear to everyone that endlessly growing energy consumption on a finite world is, eventually, not sustainable. (See Tom Murphy’s online essay, “Exponential Economist Meets Finite Physicist”.)
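
For the curious, here’s my own rough back-of-the-envelope version of that boiling-seas arithmetic, using assumed round numbers for current energy use and growth, and treating all future energy as waste heat radiated from the surface:

```python
# Rough sketch of the "boiling seas" limit: if human energy use keeps growing
# a few percent per year and is all eventually radiated as waste heat from
# Earth's surface, when does the surface hit 100 degrees C?  All inputs are
# assumed round numbers; this ignores greenhouse feedbacks and much else.
import math

SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
EARTH_AREA = 5.1e14    # Earth's surface area, m^2
T_NOW = 288.0          # assumed current mean surface temperature, K (~15 C)
T_BOIL = 373.0         # boiling point of water, K
POWER_NOW_TW = 18.0    # assumed current human power use, TW
GROWTH = 0.023         # assumed growth rate: 2.3% per year (~10x per century)

# Today the surface radiates sigma*T_NOW^4; extra waste heat has to be
# radiated on top of that, so we need sigma*T^4 = sigma*T_NOW^4 + P/A.
extra_flux = SIGMA * (T_BOIL**4 - T_NOW**4)          # extra W/m^2 needed
power_needed_tw = extra_flux * EARTH_AREA / 1e12     # in TW

years = math.log(power_needed_tw / POWER_NOW_TW) / GROWTH
print(f"Waste heat needed to reach 100 C: ~{power_needed_tw:,.0f} TW")
print(f"Years of {GROWTH:.1%} growth to get there: ~{years:.0f}")
# Roughly four centuries -- a long time, but alarmingly short for "forever".
```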

My brother thought about it for a second, and said “Well, as long as it lasts the next eighty years, I’m fine with that.” What do we do about problems that are transgenerational? Who is going to invest in interstellar space travel, if the project might take hundreds of years to yield any economic or political benefits, and the investors know that they won’t live to benefit from it?

Of course, there are ways the future can unfold that might negate some of these issues. For example, if we had an artificial intelligence that people trusted with this kind of decision-making, it might be better at long-term planning; or else, if human lifespans become extended, people — or at least the rich — might start to worry about consequences a hundred years down the road. Even something like cryogenics, if it works, could make long-term investments more palatable, and broaden people’s horizons a little. But the point, obviously, is that a global government is not necessarily the slam-dunk silver bullet to all of the problems that we face; even though, in a lot of cases, a coordinated response is necessary. And we’re really talking about something that’s incredibly impractical to establish; most of the people who’ve tried to establish one lately have been… you know… homicidal dictators like Stalin or Hitler. People are unlikely to consent to be governed by a global government unless they can be reassured that their values and interests won’t be compromised, or until our values and interests all homogenise and become very similar; and that’s not necessarily the kind of thing you can easily force — or, perhaps, the kind of world that we’d want to live in.

Let’s say you take these threats seriously — the idea that new technologies or old disasters could allow for the destruction of the species. Let’s also suppose that you believe that the survival of the species, and hence sentient life and whatever wisdom or beauty or value we’ve managed to accumulate, is the really important point here. It seems obvious that you want some kind of backstop — to ensure that if the worst happens, you’ll at least preserve the species.

So one thing we could consider is research into a kind of seedbank: some way of ensuring that humanity, and our civilization — maybe minus Twitter — survives whatever existential risks are thrown at it. There’s already, famously, a “seed vault” in Svalbard, which currently protects around 890,000 seed samples of the world’s crop varieties from natural or man-made disasters.

“The purpose of the Vault is to store duplicates (backups) of seed samples from the world’s crop collections. Permafrost and thick rock ensure that the seed samples will remain frozen even without power. The Vault is the ultimate insurance policy for the world’s food supply, offering options for future generations to overcome the challenges of climate change and population growth. It will secure, for centuries, millions of seeds representing every important crop variety available in the world today. It is the final back up.”

The idea is that, whatever happens, the genetic information associated with those important crops — wheat, maize, and so on — will be preserved in the event of some apocalyptic disaster. You’re saving the end product of millions of years of evolution for a speedier reboot. Could we do something similar with humans?

You could imagine that it might be possible to set up bunkers that could preserve a certain number of people in the event of a huge calamity. Given the relevant future technology, you could even make the whole thing entirely automated. You could have some kind of underground Ark — maybe hooked up to a “Dead Planet’s Switch” that kicks into gear when there are no longer any signs of life on the surface — where, following some calamitous event, the machines spring to life, reviving preserved humans or making new ones to recolonise the Earth; hopefully with enough individuals to avoid a nasty genetic bottleneck. You’d hopefully be able to give them a bit of a kickstart over the original human race with some of the knowledge that humanity 1.0 managed to accrue in the days before the damage. They first survive, then thrive, and, eventually, colonise other planets — fulfilling that “manifest destiny” of an interplanetary civilization that everyone’s so keen on. Naturally, this raises all kinds of ethical concerns about who gets preserved, and whether it’s fair to force these future generations to live in incredibly diminished circumstances, or perhaps to exist just to clean up the planet from whatever catastrophe befell it — but, if you zoom out far enough, as this logic requires you to do, then it’s a net positive if it allows the species to survive, and hence many billions of conscious lives to exist that wouldn’t have happened otherwise. Even if things get a little dicey around the catastrophic event, for an interstellar species, the home planet is just an ancient creation myth, a tiny part of a long and storied history.

Of course, it depends on what *kind* of catastrophe you’re talking about. A bio-engineered pandemic, nuclear war, or runaway climate change might be the kind of thing you could survive in a bunker; you could probably survive a gamma-ray burst or asteroid strike in this way, too. On the other hand, rogue nanobots — or a malicious artificial intelligence or alien species that wants to destroy the human race — may be more difficult to endure. If the threat is intelligent, it might know about the bunker — and, at any rate, it could remain deadly to humans long after more transient problems would have passed. Even carbon dioxide is removed from the atmosphere after a few millennia; not so a malign AI.

Yet if your main priority is the survival of the species as a whole, and you could set up and maintain such an insurance-policy type system for a tiny fraction of global wealth — why wouldn’t you do it? Even if it only reduced the risk of human extinction by a tiny amount, it still seems like a pretty worthwhile deal.

One of the more interesting projects I came across while researching whether anything like this existed was a now-defunct idea that some billionaires had back in 2015: to send a copy of the Jewish sacred text, the Torah, to the Moon. One can imagine plenty of catastrophes that would involve widespread destruction on Earth but leave the Moon unscathed — and it may soon be possible for us to send large amounts of information, and perhaps physical materials, there.

Alexey Turchin, just this year, expanded on this idea in a paper — “Surviving global risks through the preservation of humanity’s data on the Moon”. Samples of human genetic and cultural information could be stored on the Moon, where they might remain undisturbed for millions of years. The idea is that, if some cataclysm wiped out or severely damaged the human race, the survivors who rebuild civilization — or intelligent life-forms that evolve after us — might be able to access that information, so that all would not be lost. There is already an internet archive next to the Svalbard Seed Vault that might store our online data for a thousand years. Memory of Mankind is a project that aims to encode relevant human information on ceramic tablets and bury them in a salt mine. The Human Document Project aims to find ways to store data for a million years for future historians; the Keo project wants to launch a satellite that will return to Earth in 50,000 years.

We all know that there’s a small group of rich tech billionaires who lie somewhere on the scale between visionaries and people with more money than sense. This seems like exactly the kind of project that they’d be interested in — getting to secure your legacy as a forward-thinking sci-fi benevolent captain of industry, and claim that you’re securing the future of the human race while you’re at it. I’m sure someone will get on it soon.

The post-apocalyptic bunker, or human-factory, that can revive the species after a collapse might be a reassuring solution if what you’re concerned about is the million-year future of humanity. But if you’re concerned about avoiding millions of deaths, and unimaginable suffering and misery, then you’ll want to head catastrophic risks off at the pass.

I’m not going to pretend I have the solutions to each of the problems we’ve discussed. Thousands of people far smarter than me devote their lives to trying to solve aspects of each one. But there are some general things that I think will help — and they start with actually listening to those experts. One of the most frustrating things for me to watch lately, in US politics and in my home country, is the war on expertise. Trump has dismissed the officials in charge of pandemic response and of responding to cyberwarfare. Here in the UK, we’re told that people have “had enough of experts”.

Guess what? We can’t expect average citizens to know the best course of action against nuclear war, artificial intelligence, cyberwarfare, pandemics, nanotechnology, bioengineered viruses, climate change, and natural disasters. These problems can only be solved by investing time, money, and human skills into understanding and keeping track of them: they can only be solved by communicating the issues clearly to decision-makers. We have to get wiser than we are at the moment.

Instead of top-down impositions from an authority that we’re going to find it bloody hard to agree on, one thing that would make it easier to address a lot of the problems we face is ensuring some kind of uniformity of progress. It’s the same issue we have in the fight against pandemics: the fact that some countries have poor public health services winds up being a problem for everybody, because if an infectious disease crosses over into a human population where sanitary conditions are poor, it can end up affecting everyone in a globalised world. Systems are more interconnected than they have ever been; we saw this in the financial sector, when instabilities in the US sub-prime mortgage market cascaded through those complex interconnections into the global economic downturn of 2007–8. Since this trend is so difficult to reverse, I hope it will gradually become clear to people that spreading progress out a little more evenly — or at the very least, preventing anyone from falling too far behind — is beneficial to everyone. After all, this was a key motivator behind creating the welfare state in Britain: society as a whole benefits if everyone has enough to eat.

Similarly, if progress is a little more universal, and resources more available to everyone, then there might be less potential for the kinds of conflict that could lead to existential risks from nuclear weapons, biotechnology, or cyberwarfare. If progress is a little more universal, natural disasters like earthquakes and supervolcanoes won’t disproportionately kill people in poor countries. The earthquake that struck Haiti was 7.0 in magnitude; a similar earthquake in a richer country would undoubtedly have killed fewer people, but in Haiti, it led to hundreds of thousands of deaths because of the country’s relative poverty. If progress is a little more universal, there might be fewer disaffected groups that would want to release a bioengineered virus. Restrictions on things like carbon emissions would seem fairer, rather than unfairly constraining the economic growth of less economically developed countries. This is what I mean when I say that, in many ways, the story of the 21st century is going to be one of whether our morality, our society, our intelligence, and our kindness can keep pace with the rapid evolution of our technology. If we can move towards a more equal society, not just in our own countries but globally, it will surely be better for everyone — for our species in the long run. Right now, of course, it could go either way.

Of course, there is one final crucial component to this discussion. This is the simple question of “How do we talk about this?”

This is something people in the climate-change world have been grappling with for a very long time. Recently, they have tried to break away from the “five-minutes-to-midnight” narrative. When a scientist issues a dire warning, it can often get taken out of context. Part of this is driven by sincere people who genuinely want everyone to appreciate the importance of the issue; part of it is an undercurrent of people wanting to sell newspapers or get clicks.

Climate scientists, for years, have been talking about our “last chance to act”, and using science to create a picture of what could happen if we don’t. But the problem is that it’s such a long-term, intergenerational issue. These projections are often for the end of the century — yet the emissions we have today lock us into future climate change and remain in the atmosphere for a thousand years. We know that decarbonising the world economy is going to be an immense undertaking that will take decades to accomplish. Emissions need to start falling *now* for us to have any hope of doing that before locking in more damage, but instead they flatline, or continue to rise.

The issue is one of communication. If you go too heavy on the doom and gloom, it can backfire. People might decide that you’re exaggerating the threat — after years of warnings, they don’t see anything too apocalyptic happening. Or they might become depressed, decide that the problem is too difficult to solve, and bury their heads in the sand — or decide that we’ll just have to adapt to it.

The reality is that *everything* positive we do means less damage in the future. If we miss the Paris climate goals, it’s better to miss them by a little than a lot. But motivating people to do everything they can to help in this endeavour is a really difficult balancing act between optimism (which risks complacency) and pessimism, or maybe realism, which risks despair.

So the field of existential risk studies has come in for some criticism from a group of people called the ‘New Optimists’ — Steven Pinker is key amongst them. The essential point that many of them make is that, on the whole, the world has got better in the 20th and 21st centuries.

This is perfectly true; for the vast majority of people, life is better in a lot of ways. In 1820, most of the world lived in what would today be considered extreme poverty; now it’s less than 10%. Child mortality has declined, healthcare has improved, and there have been fewer wars since 1945; more people are better educated than ever before. The quality of life for the vast majority of people has improved vastly over the last century.

There are also new problems that our ancestors didn’t have to deal with, but I think a rational observer in most places would prefer to live now than in the Middle Ages.

And we should certainly keep in mind how far we’ve come before declaring that modern life is rubbish or looking at the past through rose-tinted spectacles. But Pinker and others — in defence of their optimistic thesis — feel the need to downplay catastrophic risks.

But conflating this general optimism about the way that things are going for the species as a whole with a conclusion that there aren’t new and bigger risks than ever before is an error in judgement. We’re talking about the *probability* of events that haven’t happened yet.

We can’t fall into the hindsight fallacy. Just because we’ve avoided these risks so far doesn’t mean that this was in any way inevitable. Part of why I’m so fascinated by nuclear weapons and the Cold War is putting yourself in the shoes of the people who lived at the time — the first time we invented a technology that threatened an unimaginable, swift catastrophe for the species. Look how close they thought we came to annihilation. Look how close we did come. Survival is not guaranteed. We could just have got lucky. And the fact that a smaller fraction of people than ever before lives in extreme poverty is great, and should be celebrated — as well as motivating us to lift the remainder out of poverty — but it doesn’t have anything to do with the risk of some terrible catastrophe. The fact that, in the 1940s and 1950s, the majority of the world’s population could read and write for the first time is good — but it’s hardly comparable to the fact that nuclear weapons were invented in those same years. These are apples and oranges.

It’s a simple fallacy. The two can be true at the same time: we can be living in the best time for humans to be alive, with the most advantages for the most people across a wide swathe of metrics that we’ve ever had — and it can also be the most dangerous and risky time for the human species.

And the two are completely intertwined, in an obvious way: “dual-use” technologies give us the power to do both good and evil. An interconnected world can improve cultural understanding and reduce conflicts; it also means that a local collapse of a mortgage market in the US can become a global financial crisis that topples economies across the world. New technologies give us new power to lift people out of poverty and improve our own lifestyles — and maybe, eventually, fulfil that dream of living lives of leisure — but they create new problems and risks. Some of those are existential risks.

Do developments in artificial intelligence mean that the world will inevitably be destroyed by a misaligned AI? Of course not. Do they make this possibility more likely? Hard to argue that they don’t — in the same way as the invention of nuclear weapons, while not guaranteeing anything, made more devastating and deadly wars than the world had ever known a real possibility.

But nuclear weapons are different in kind from the threats we might face in the future. If destructive power falls into the hands of a greater and greater number of agents, through nanotechnology, biotech, or certain kinds of AI, then we need to be certain that every one of those agents can be trusted. This is from an article by Phil Torres, whom listeners will remember we interviewed on existential risks:

“Thinking about this situation more abstractly, John Sotos has recently crunched some numbers to show that the distribution of offensive capabilities could all but guarantee civilizational collapse. For example, a 1 in 100 chance that only a few hundred agents will release a pandemic-level pathogen yields almost inevitable doom within 100 years or so. If the total number of people who can cause global-scale harm rises to 100,000, the probability of any one person releasing such a pathogen must be less than 1 in 10⁹ for civilization to survive a single millennium. In other words, low probabilities can add up pretty quickly as the total number of individuals capable of mass destruction increases.”
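
To make the arithmetic in that quote concrete, here’s a minimal back-of-the-envelope sketch in Python. It assumes, purely for illustration, that every capable individual acts independently with the same constant chance per year of releasing such a pathogen; the 100,000 agents, the millennium, and the 1-in-10⁹ threshold come from the quote, while the function name and the 10⁻⁷ comparison value are my own hypothetical choices.

```python
# Back-of-the-envelope sketch of how small per-person risks compound.
# Hypothetical assumption: each capable individual independently has the same
# constant probability p_per_year of releasing a civilization-scale pathogen.

def survival_probability(p_per_year: float, agents: int, years: int) -> float:
    """Chance that nobody releases the pathogen over the whole period."""
    person_years = agents * years             # independent person-year "trials"
    return (1.0 - p_per_year) ** person_years

# 100,000 capable individuals over a millennium is 10^8 person-years.
print(survival_probability(1e-9, 100_000, 1_000))   # ~0.90: survival is likely
print(survival_probability(1e-7, 100_000, 1_000))   # ~0.00005: collapse is near-certain
```

Under these toy assumptions, raising the per-person probability by just two orders of magnitude flips near-certain survival into near-certain doom, which is exactly the sense in which low probabilities add up.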

Past performance is no predictor of future results. And sadly, it seems that there’s no shortage of people who might be willing to pull this kind of trigger.

As Phil Torres points out in his excellent rebuttal to this kind of thinking, which I’ll link to in the show notes, it’s wrong to think that people concerned with catastrophic risks are doom-and-gloom types. If you were a hidebound pessimist who saw technology as the source of many potential risks, you’d be a Luddite; most of these people aren’t. They call for safe artificial intelligence rather than none; safe biotech rather than none; safe nanotechnology rather than none. Many are convinced that the reason we should be so worried about human extinction now is that it would scupper our otherwise inevitable path towards becoming “more than human”: a sci-fi-esque species that spreads amongst the stars. If anything, I think many people who worry about this kind of thing are too optimistic in their projections for technology. I would urge us to accept the shades of grey, the possibility of futures we can’t yet imagine that are weird compromises between sci-fi dreams and the problems of today, and to avoid millenarian thinking in which everything leads either to paradise or to the apocalypse.

The criticism from Pinker and others does serve to make a valuable point about our communications strategy: whenever we raise awareness of this or that threat or risk, rather than ending with doom and gloom, we should talk about the good things we can do to alleviate these risks, to help people understand them, and to create a more resilient society.

Imagining these doom-laden scenarios is not an exercise in scaring ourselves. Instead, it’s the best way to make sure that our technology is safe. Even if the risks I’ve spent months discussing are all exaggerated, society needs a few Chicken Littles; a thorough investigation of whether or not the sky is falling in costs little and has enormous potential benefits. We should highlight the problems and pitfalls of new technologies. We should indulge our overactive imaginations. If you only realise the problem with some new technology after it’s widespread, it’s already too late. You can argue it through a sort of Pascal’s Wager: if you’re wrong to be cautious about new developments, the worst that happens is you lose some time when you could’ve benefited from that technology. If you’re wrong to be gung-ho, the consequences could be far worse. And even if these technologies are years, decades, or centuries away from being dangerous in the way we imagine they might be today, the safety problems could take years, decades, or centuries to solve. We lose nothing by thinking about how technologies might be made safe before we invent them. In fact, it’s crazy not to.

The most compelling part of Pinker’s argument is the idea that, if we ladle on the doom and gloom too heavily, we might not respond to the risks particularly well. People might become despondent and feel that the species is doomed, and that’s a state of mind that doesn’t motivate action, which makes these risks less likely to be addressed if they are real. As he puts it: “Sowing fear about hypothetical disasters, far from safeguarding the future of humanity, can endanger it.”

To this, I’d say a few things.

#1) People having a serious, reasoned, and thoughtful discussion about potentials and pitfalls, about the future of our species and the challenges we face, cannot be a bad thing. I don’t think these things are taken seriously enough, and they’re certainly not well understood enough. There are lots of people doing great work to explain the threats posed by climate change, nuclear weapons, superintelligent AI, and whatever else, but existential risks still seem to be pretty low on the priority list for governments, which are focused on re-election, and corporations, which are focused on profit.

#2) The mature, responsible response to something scary, or to a challenge you face, isn’t to give up but to take concrete action that might help you succeed. If we assume that we’re the kind of people who will bury our heads in the sand or lapse into despondency when facing real dangers, then we’re not giving ourselves the best chance to deal with these problems. And consider the alternative: should climate scientists not raise the alarm about climate change because it will upset people? If people studying artificial intelligence think it could have a negative impact, should they just carry on regardless?

If we overstate risks to the point that people think doom and gloom is inevitable, then we’ve certainly failed. And it’s true that assessing the risks from superintelligent AI, nanotechnology, or other future technologies that simply don’t yet exist is very, very difficult. Our own cognitive biases work against us here: studies by the Nobel Prize-winning psychologist Daniel Kahneman and others have shown that describing a scenario in more detail automatically makes people feel it’s more likely. We’re not good at assessing probabilities in our day-to-day lives, let alone when they involve facts we don’t yet know. So perhaps the risk from AI or nanotech or biotech is very small. But assuming that it’s zero is a pretty dangerous game to play. Some people assumed that it would be impossible to make a nuclear bomb; yet here we are, perpetually dangling on the precipice, with most major cities in the world a few bad decisions away from annihilation.

I hope that a reasonable study of everything that’s out there would lead someone to conclude that the risks are real, even if they disagree about how important each one might be. If the alternative is to say, “The world’s never ended before, so why should we believe it might now?”, then you run the risk of making a terrible miscalculation. Our strategy should always be to emphasise what can be done to fix these problems, and the agency that we have to fix them.

Learning even a little about this has taught me that the biggest risks to our survival are human-made. They are an inevitable flip side to the new power that we are harnessing with our technology. We built them; we can take them apart. The future is ours, if we can be smart enough and wise enough to keep it.

www.physicalattraction.libsyn.com