One of the main tropes of science fiction has to be the self-aware robot or computer – one mobile, the other not, but both presented as conscious and able to think as we do, though often better.
Often, Frankenstein-style, the AI develops malevolence. That was a trope long before HAL; virtually all of Asimov’s robot stories from the 1940s onwards were designed to counter the notion of the AI turning on its creators. Asimov’s answer – which, apparently, was proposed to him by John W. Campbell – was the ‘laws of robotics’, under which machines simply couldn’t harm humans.
Inevitably, these laws didn’t work, and Asimov knew it; a lot of his stories involved finding ways in which the laws failed. He spelled out the main point of failure in one of the final robot novels: all the builder had to do was program a different definition of ‘human’ into a robot. And more recently, work on robots has shown that such laws are impractical, not least because they require value judgements that current machine technology simply can’t make.
That also highlights the other problem – for all the work done to date and all the conceits of creating ‘smart’ machines, the AI we’ve come up with is nothing like the notion of a self-aware, thinking machine in our image, mentally or otherwise. And part of the reason is that, to this day, nobody actually knows where consciousness comes from, how it’s generated, or even what it actually is. Oh, we have some very good guesses and theories. But actually knowing? No.
Back in the twentieth century it seemed much simpler. The brain was apparently a very large and complex switching machine, like a phone exchange. Build enough switches and you could reproduce it. Arthur C. Clarke played with that idea in his ‘Dial F for Frankenstein’, a hilarious short story in which all the world’s phone systems were linked up… and developed consciousness as soon as the linking system was switched on.
The advent of computers merely shifted the ground rules of the model; even 1940s computers had more complex switching apparatus than a phone exchange of the day, and so clearly the brain was actually a giant computer. Self-awareness and intellect were thought to be products of that switching, and all we had to do was build increasingly sophisticated hardware, and hey presto – self-aware AI. Later, when stored-programme computers were developed (which is what we refer to as a ‘computer’ today), it became merely a matter of adding suitable software to that hardware, and voila. I mean, how hard could it be? It might easily be achieved by – oh, let’s say 2001.
This concept has been persistent, not just in terms of driving science-fiction visions of what constitutes AI, but also driving real-world expectations. It’s what has created the idea of the ‘singularity’, for instance, in which computers will somehow have the ability to receive an ‘upload’ of human consciousness and memory, rendering us immortal.
As far as I can tell that’s wishful thinking, because the basic model around which it’s formulated – that consciousness is the equivalent of an ultra-complex software programme running on ultra-complex hardware – is flawed. The fact is that despite all the work done to date on AI, what we’ve created is still not self-aware and still can’t think for itself. Efforts to build machines such as chatbots that can mimic human responses well enough to pass the Turing test – in which conversation alone is taken to be enough to judge whether the respondent can think – have had mixed results; and ultimately there’s no evidence that they are doing anything other than producing sophisticated rote responses. We also have algorithms that can do increasingly sophisticated things, including what appears to be creativity. But none of it can think for itself as we do. That said, the sophistication of the AI systems we have is significant. AI, in short, is being achieved in many ways. But it’s something very different from what we imagined.
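To make the ‘rote responses’ point concrete, here’s a minimal, purely illustrative sketch in the spirit of the old ELIZA chatbot – the rules and replies are invented for the example, and real chatbots are vastly more elaborate – but the principle is the same: match a keyword, fill in a canned template, and nothing in it understands anything.

```python
import random
import re

# A minimal, hypothetical ELIZA-style responder: it matches keyword patterns
# and fills in canned templates. There is no understanding here at all -
# just pattern-matching, which is the point.
RULES = [
    (r"\bI feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bI think (.+)", ["What makes you think {0}?", "Are you sure {0}?"]),
    (r"\b(hello|hi)\b", ["Hello. What would you like to talk about?"]),
]
FALLBACK = ["Tell me more.", "I see. Go on.", "Why do you say that?"]

def respond(text: str) -> str:
    """Return a canned reply based on the first rule whose pattern matches."""
    for pattern, templates in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACK)

if __name__ == "__main__":
    print(respond("I feel that machines will never really think"))
    print(respond("Hello there"))
```

Modern systems are enormously more sophisticated than this toy, but the criticism stands: sophistication of response isn’t evidence of self-awareness.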
Meanwhile, nobody has found what consciousness actually is, still less what constitutes intelligence. The evidence currently points towards self-awareness being a second-order effect, an emergent property rather than a fundamental product of, for instance, neuron switching. It cannot be defined by the ‘hardware’ and ‘software’ alone. But nobody quite knows. Nor is the question likely to be answered mathematically; there’s that annoying principle that a system cannot fully analyse itself from within.
On that basis, I’m inclined to think we’ll end up with some quite sophisticated AI systems before too long, but they won’t be like us, they won’t think like us (or even, necessarily, ‘think’), and they won’t be self-aware in ways that we understand. What’s more, the ‘singularity’, with immortality via computer upload, is a fantasy.
Copyright © Matthew Wright 2018