
General AI May Not Look Like Us
Who’s Afraid of Artificial Intelligence?

The question of whether a general artificial intelligence could be developed in the near future — and, if so, when this might arrive — is a controversial one. Some futurists point to Moore’s Law and the increasing capacity of machine-learning algorithms to suggest that a more general breakthrough is just around the corner.

Others suggest that extrapolating exponential improvements in hardware is unwise, and that creating narrow algorithms that can beat humans at individual, specialised tasks brings you no closer to a “general intelligence”.

It seems difficult to rule out general AI as impossible. After all, evolution has produced minds like the human mind at least once. Surely we could create artificial intelligence simply by copying nature: either by guided evolution of simple algorithms, or by wholesale emulation of the human brain?

Both of these ideas are far, far easier to conceive of than they are to actually achieve. Emulating the 302 neurons of the nematode worm’s nervous system is still an extremely difficult engineering challenge, let alone the 86 billion neurons in a human brain. It’s unknown what level of detail would need to be captured to re-create a human intelligence, or whether doing so is even possible. One — very uncertain — estimate suggests that 2070 might be the earliest we could expect to see such technology.

Leaving aside these caveats, though, a great many people are worried about general artificial intelligence. Essentially, the fears go like this. We imagine the algorithm as an agent: an intelligence with a specific goal. Once a human-level intelligence is developed, it will improve itself — increasingly rapidly as it gets smarter — in pursuit of whatever goal it has, and this “recursive self-improvement” will lead it to become superintelligent.

This “intelligence explosion” could catch humans off-guard. If the initial goal is poorly specified, if adequate safety features are not in place, or if the AI decides it would prefer to do something else instead, humans may be unable to control their own creation.

The Philosopher’s Playground

The idea of a superintelligent AI like this is really a philosopher’s playground. Nick Bostrom’s book Superintelligence explores these themes, which amount to “What would you tell a God to do, if you could give it instructions?” Perhaps you decide to tell the AI to “Make everyone happy”, and it decides the best way to do this is to replace humanity with simulated brains endlessly experiencing a single moment of ecstasy on loop. Perhaps its initial goal involves making money, or paperclips, or calculating digits of pi, and it converts most of our world into computing infrastructure in a single-minded, maniacal pursuit of this goal.

Thinking about this allows for fascinating philosophical speculation about what a “true system” of human morality should look like, what it means to be intelligent, and what it means to be conscious: what we “want” as humans, and how to explain these concepts to a mind that looks nothing like our own. Evidently it’s not a straightforward question — even defining the terms of the discussion is difficult. But is it the right question to be asking?

Drexler and Comprehensive AI Services

There are dissenters to this picture of how general artificial intelligence might arise. One notable alternative point of view comes from Eric Drexler, famous for his work on molecular nanotechnology and Engines of Creation, the book that popularised it.

With respect to AI, Drexler believes our view of an artificial intelligence as a single “agent” that acts to maximise a specific goal is too narrow — it almost anthropomorphises AI, modelling it on our own minds rather than on a realistic route towards general intelligence. Instead, he proposes “Comprehensive AI Services” (CAIS) as an alternative route to general artificial intelligence.

What does this mean? Drexler’s argument is that we should look more closely at how machine learning and artificial intelligence algorithms are actually being developed in the real world. No one is close to developing a general intelligence, and there’s not much money to be made in simulating the brains of worms, or in developing software that performs many tasks inadequately.

Instead, the optimisation effort is going into producing algorithms that can provide services and perform tasks — translation, music recommendation, classification, medical diagnosis, and so forth. AI-driven improvements in technology, argues Drexler, will lead to a proliferation of different algorithms: software and technology improvements that can automate increasingly complicated tasks. Recursive improvement in this regime is already occurring — take the newer versions of AlphaGo, which learn to improve themselves by playing against previous versions of themselves.
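The self-play loop behind that kind of improvement is simple to sketch in outline. The snippet below is a minimal, hypothetical illustration — the `Policy` class, its `strength` number, and the promotion threshold are invented for this example and bear no relation to AlphaGo’s actual code — showing the basic idea: train a challenger against a frozen copy of the current best player, and promote it only if it wins convincingly.

```python
# A minimal, hypothetical sketch of recursive improvement via self-play.
# Nothing here is real AlphaGo code; it only illustrates the loop of
# "play your previous best self, learn, promote the winner".

import copy
import random


class Policy:
    """Stand-in for a game-playing model; 'strength' fakes its skill level."""

    def __init__(self, strength=1.0):
        self.strength = strength

    def play_match(self, opponent, games=100):
        """Return the fraction of games won against `opponent` (toy model)."""
        p_win = self.strength / (self.strength + opponent.strength)
        return sum(random.random() < p_win for _ in range(games)) / games

    def learn_from_games(self):
        """Placeholder for training on self-play data."""
        self.strength *= 1.5  # pretend each round of training helps noticeably


best = Policy()
for generation in range(10):
    challenger = copy.deepcopy(best)
    challenger.learn_from_games()           # train on games vs. the current best
    win_rate = challenger.play_match(best)  # evaluate against the frozen best
    if win_rate > 0.55:                     # promote only a clear improvement
        best = challenger
    print(f"generation {generation}: win rate {win_rate:.2f}")
```

In a real system the “learning” step is an expensive training run and the evaluation involves thousands of games, but the structure — a pool of past versions providing the opposition for the next one — is the same.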

Many Smart Arms, No Smart Brain

Instead of relying on some unforeseen breakthrough, the CAIS model of AI simply assumes that specialised, narrow AI will continue to improve at each of its tasks, and that the range of tasks machine-learning algorithms can perform will become ever wider. Ultimately, once a sufficient number of tasks have been automated, the services an AI can provide will be so comprehensive that, taken together, they will resemble a general intelligence.

One could then imagine a “general” intelligence as simply an algorithm that is extremely good at matching the task you ask it to perform to the specialised service algorithm that can perform that task. Rather than acting like a single brain that strives to achieve a particular goal, the central AI would be more like a search engine — looking through the tasks it can perform to find the closest match, and calling upon a series of subroutines to achieve the goal.
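As a loose illustration only — the service functions and the crude keyword-matching rule below are invented for this sketch, not anything Drexler specifies — the “general” layer can be pictured as a thin dispatcher sitting over a registry of narrow services:

```python
# A loose, hypothetical sketch of the CAIS picture: a thin dispatcher that
# matches an incoming request to the closest registered narrow service.
# The services and the word-overlap matching rule are toy examples; a real
# system would route between far more capable components.

def translate(text):
    return f"[translation of: {text}]"

def recommend_music(listener):
    return f"[playlist for: {listener}]"

SERVICES = {
    "translate text into another language": translate,
    "recommend music to a listener": recommend_music,
}

def dispatch(request):
    """Pick the service whose description shares the most words with the request."""
    words = set(request.lower().split())
    best_description = max(
        SERVICES, key=lambda desc: len(words & set(desc.split()))
    )
    return SERVICES[best_description]

task = dispatch("please translate this text")
print(task("Bonjour tout le monde"))
```

The point of the sketch is the shape of the design, not the matching rule: all of the capability lives in the individual services, while the top level merely routes between them.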

For Drexler, this is inherently a safety feature. Rather than Bostrom’s single, impenetrable, conscious and superintelligent brain — one we must try to psychoanalyse in advance without really knowing what it will look like — we have a network of capabilities. If you don’t want your system to perform certain tasks, you can simply cut off its access to those services. There is no superintelligent consciousness to outwit, outmanoeuvre, or “trap”: the system is more like an extremely high-level programming language that responds to complicated commands by calling upon one of the myriad specialised algorithms developed by different groups.

This skirts around the complex problem of consciousness — and all of the sticky moral quandaries that arise in making minds that might be like ours. After all, if you could simulate a human mind, you could simulate it experiencing unimaginable pain. Black Mirror-esque dystopias, in which emulated minds have no rights and are regularly “erased” or forced to labour at dull and repetitive tasks, heave into view: what Bostrom calls “mind crime”. Drexler argues that, in this services model, there is no need to ever build a conscious algorithm. Yet it seems likely that, at some point, humans will attempt to simulate our own brains — if only in the vain pursuit of immortality. This model cannot hold forever. Its proponents argue, though, that any world in which we could develop a superintelligent general AI would probably also have already developed superintelligent capabilities across a huge range of narrower tasks, such as computer programming and natural language understanding. In other words, CAIS arrives first.

The Future In Our Hands?

Drexler argues that his model already incorporates many of the ideas from general AI development. In the marketplace, algorithms compete all the time to perform these services: they undergo the same evolutionary pressures that lead to “higher intelligence” — but the behaviour that counts as superior is chosen by humans, and the nature of the resulting “general intelligence” is far more heavily shaped by human decision-making and human programmers. Development in AI services could still be rapid and disruptive, without requiring a conscious agent. But in Drexler’s picture, the R&D capacity comes from the planet as a whole, from humans and organisations driven by the desire to improve algorithms that perform individual, useful tasks, rather than from a single conscious agent recursively reprogramming and improving itself.

In other words, this vision does not absolve us from the responsibility of making our AI safe — if anything, it gives us a greater degree of responsibility. As more and more complex “services” are automated, performing what used to be human jobs at superhuman speed, the economic disruption will be severe. Equally, as machine learning is trusted to carry out more complex decisions, avoiding algorithmic bias becomes crucial. Shaping each of these individual decision-makers — and trying to predict the complex ways that they might interact with each other — is no less daunting a task than specifying the goal for a hypothetical, superintelligent, God-like AI. Arguably, the consequences of the “misalignment” of these services algorithms are already multiplying around us.

The CAIS model bridges the gap between real-world AI and machine-learning development, with its immediate safety considerations, and the more hypothetical, speculative world of superintelligent agents and the problem of controlling their behaviour. We should keep our minds open as to what form AI and machine learning will take, and how they will influence our societies. Nothing is inevitable — except that we must take care to ensure the systems we create don’t end up forcing us all to live in a world of unintended consequences.