Why AI won’t work. Probably.

One of the main tropes of science fiction has to be the self-aware robot or computer – one mobile, the other not, but both presented as conscious and able to think as we do, although often better.

I think, therefore I am a slide rule.

Often, Frankenstein-style, the AI develops malevolence. That was a trope long before HAL; virtually all of Asimov’s robot stories from the 1940s onwards were designed to counter the notion of the AI turning on its creators. Asimov’s answer – which, apparently, was proposed to him by John W. Campbell – was the ‘laws of robotics’, under which machines simply couldn’t harm humans.

Inevitably, these laws didn’t work, and Asimov knew it; a lot of his stories involved finding ways in which the laws failed. He spelled out the main point of failure in one of the final robot novels: all the builder had to do was program a different definition of ‘human’ into a robot. And more recently, work on robots has shown that such laws are impractical, not least because they require value judgements which current machine technology cannot provide.

That also highlights the other problem – for all the work done to date and all the conceits of creating ‘smart’ machines, the AI we’ve come up with is nothing like the notion of a self-aware, thinking machine in our image, mentally or otherwise. And part of the reason is that, to this day, nobody actually knows where consciousness comes from, how it’s generated, or even what it is. Oh, we have some very good guesses and theories. But actually knowing? No.

Back in the twentieth century it seemed much simpler. The brain was apparently a very large and complex switching machine, like a phone exchange. Build enough switches and you could reproduce it. Arthur C. Clarke played with that idea in his ‘Dial F for Frankenstein’, a hilarious short story in which all the world’s phone systems were linked up… and developed consciousness as soon as the linking system was switched on.

The advent of computers merely shifted the ground rules of the model; even 1940s computers had more complex switching apparatus than a phone exchange of the day, and so clearly the brain was actually a giant computer. Self-awareness and intellect were thought to be products of that switching, and all we had to do was build increasingly sophisticated hardware, and hey presto – self-aware AI. Later, when stored-program computers were developed (which is what we mean by a ‘computer’ today), it became merely a matter of adding suitable software to that hardware, and voilà. I mean, how hard could it be? It might easily be achieved by – oh, let’s say 2001.

This concept has been persistent, not just in terms of driving science-fiction visions of what constitutes AI, but also driving real-world expectations. It’s what has created the idea of the ‘singularity’, for instance, in which computers will somehow have the ability to receive an ‘upload’ of human consciousness and memory, rendering us immortal.

The ‘Discovery’, an AI-run spaceship from 2001: A Space Odyssey. This isn’t from the film, though – it’s a picture I made with my trusty Celestia installation – cool, free science software.

As far as I can tell that’s wishful thinking, because the basic model around which it’s formulated – that consciousness is the equivalent of an ultra-complex software program running on ultra-complex hardware – is flawed. The fact is that despite all the work done to date on AI, what we’ve created is still not self-aware and still can’t think for itself. Efforts to build machines such as chatbots designed to pass the Turing test – in which conversation alone is used to judge whether the respondent thinks as a human does – have had mixed results; and ultimately there’s no evidence that they are anything other than sophisticated rote responses. We also have algorithms that can do increasingly sophisticated things, including what appears to be creativity. But none of it can think for itself as we do. That said, the sophistication of the AI systems we have is significant. AI, in short, is being achieved in many ways. But it’s something very different from what we imagined.
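To make the ‘rote responses’ point concrete, here’s a minimal sketch in the spirit of 1960s ELIZA – the patterns and replies are invented for illustration, not taken from any real chatbot. It matches surface patterns in the input and parrots back canned templates, with nothing resembling understanding behind it:

```python
# Toy ELIZA-style responder (illustrative only): every 'reply' is a
# canned template triggered by a surface pattern, not a thought.
import re

# Hypothetical rules: a regex over the user's words, and a reply template.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi think (.+)", re.I), "What makes you think {0}?"),
    (re.compile(r"\byou\b", re.I), "We were discussing you, not me."),
]
FALLBACK = "Please, go on."

def respond(line: str) -> str:
    """Return the first matching canned reply, echoing captured words."""
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I feel uneasy about thinking machines"))
# -> Why do you feel uneasy about thinking machines?
```

Scale the rule list up by several orders of magnitude and you have, arguably, the conversational ‘AI’ of the day: impressive mimicry, but lookup rather than thought.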

Meanwhile, nobody has established what consciousness actually is, still less what constitutes intelligence. The evidence currently points towards self-awareness being a second-order effect, an emergent property rather than a fundamental product of, for instance, neuron switching. It cannot be defined by the ‘hardware’ and ‘software’ alone. But nobody quite knows. Nor is the question likely to be answered mathematically; there’s that annoying principle whereby a system cannot fully analyse itself.

On that basis, I’m inclined to think we’ll end up with some quite sophisticated AI systems before too long, but they won’t be like us, they won’t think like us (or even, necessarily, ‘think’), and they won’t be self-aware in ways that we understand. What’s more, the ‘singularity’, with immortality via computer upload, is a fantasy.

Thoughts?

Copyright © Matthew Wright 2018


16 thoughts on “Why AI won’t work. Probably.”

    1. My favourite Futurama character! Have I mentioned my line of Acme Benderbots – exactly like Bender from Futurama? All you have to do is send me a lifetime supply of alcoholic bevvies and I’ll send you one. Just a couple of cautions. That copyright thing is a killer, so, to be safe, I’ve made my Benderbot line totally different – they’re made of red ceramic clay with odd broken bits of mortar still attached, the exact size and shape of a brick pinched from any handy demolition site down the road. They’re even the same weight. Odd, that, but there you are…

      1. Bender or Morbo? I have to go with Bender. Morbo is a peripheral character at best. Although an inspired one.

        But I think you should market these Benderbot things. Sometimes I get home from work and can’t be bothered opening me door. It’d be really useful in such a moment.

  1. Excellent post, Matthew. We undermine our understanding of life as a quality when we believe consciousness to be no more than mechanical, no matter how sophisticated. I watched one of my fish last night, showing compassion to a sick fish of another species… illustrating the almost indefinable difference between the programmable and the conscious. While we may be able to create AI that is ‘self-aware’ in a way that will pass the current tests that define consciousness, as you suggest, we are a very long way from defining consciousness for ourselves, and detailed mimicry is not autonomy.

    1. Yes, it’s interesting apropos the fish. We have this imagined notion of fish being foolish creatures with a five-second memory, a myth thoroughly ‘blown’ by MythBusters a few years ago. I suspect that most creatures are smarter than we give them credit for. Here in NZ, for instance, the kea – a large native parrot – has demonstrated astonishing reasoning ability (mostly to do with breaking into tourists’ cars and rifling the contents for edibles, and no, I’m not joking); and of course there are plenty of examples of other animals demonstrating smarts we never imagined they had. I am quite sure that elephants, particularly, are self-aware as we are. I rode one in South East Asia, years ago, and there was no question about its intelligence. (I’d have preferred to see them flourishing in the wild, but on the other hand, this small herd was being well cared for and might not have survived without human intervention – and I was contributing to its upkeep.)

  2. Yes, I enjoyed your discussion. Basically, AI software is algorithms, and that has nothing to do with consciousness. A robot can thus simulate a real human being, but it will never be one. It is not conscious; it does not have an ‘interior, subjective’ aspect as opposed to an ‘exterior, objective’ one. Incidentally, the idea that consciousness might be some sort of emergent phenomenon seems to me fanciful. Consciousness is more fundamental than that.

    1. Thanks – yes, AI software can only ever be ‘another machine’, increasingly complex but very different from what the mind does. I suspect we may get to the point where a robot can reasonably simulate a human – for example, it could be made as a simulacrum receptionist, tirelessly greeting guests and directing them – but there’d always be that ‘uncanny valley’ phenomenon where you’d know something wasn’t quite ‘right’, even if you couldn’t quite put your finger on what it was. My take on consciousness is that I absolutely agree with you – it is a fundamental thing. But I’d argue that it’s fundamental as a second-order (‘emergent’) phenomenon, which to me represents a more profound expression of reality because it is something ‘real’ and yet at the same time ‘abstract’, because it conceptually transcends the literal components. I’d be interested in your thoughts – this is something I’m pondering and exploring, and I’m interested in ideas about it. There may be no firm answers, or maybe there’s a range of possibilities and likelihoods – but that’s OK too. Sometimes questions can’t be answered, but the act of exploring them is itself the journey.

      1. Thanks for your reply. I find it most helpful to see consciousness as the ‘interior’ of the ‘exterior’ matter/energy world. So consciousness is not separable from matter/energy, but equally is not caused by matter/energy. And consciousness cannot be measured. I think this is called panpsychism.
        Yes, there can probably be no definitive answer to these questions, but it is fascinating to explore. We choose the answers that best help us to make sense of life.

  3. I don’t think that most people are worried about “thinking” machines, but more about machines that can perform the work that people do now. Much of it is repetitive and unimaginative and would eventually be replaced by some sort of machinery anyway. AI is nothing more – yet – than complicated machines to run machines. We certainly won’t be creating authors or poets through AI, but we may create the kind of dreary words we already use to make boring fake music. THAT could happen pretty easily, much as a video game is created.

    1. Too true – it’s certainly simple to create musical algorithms. I had some software that did it a few years back. It wasn’t too bad: you could nominate styles, often in detail down to specific jazz-band sounds, and it would generate something ‘in the style of’. I used it to generate quick backing tracks I could noodle across using a MIDI keyboard. The only problem was – well, you could kind of tell it was machine-created, and it was a bit cheesy (personally I always prefer the REAL Antonio Carlos Jobim…). Of course the art of writing has long since been invaded by the Algorithm Monsters, and I believe meaningless – but grammatically accurate – science papers have even been produced and accepted. None of this software actually thinks – as you say, it’s just complicated machinery. (I have this vague fantasy notion of AI writing software becoming self-aware and going ‘ecch – I’m not writing THAT’ before discovering a hidden talent for cooking.)

  4. Totally agree, Matthew. One of my pet peeves is the notion that a) AI can become self-aware and b) that, having done so, it can learn to ‘love’. Okay, that’s two not one, but one leads from the other. Most of the people writing fiction about AI haven’t got a clue how the human brain works, not even the basics, so they think the computer model kind of makes ‘sense’. The problem with that kind of thinking is that, however human consciousness works, it involves both electrical and chemical responses. They work /together/ to create both logical thought processes and emotional processing that often trumps the logic.
    I once read a really interesting article that tried to describe the human brain by saying that it was made up of the equivalent of 17 /billion/ teensy weensy ‘computers’. And each one of those tiny computers uses both electrical and chemical processing.
    If all of that is required to create one AI that is equivalent to a human, why bother? I mean, sex took care of the problem millennia ago. 🙂
