True AI and why it’ll ignore us

It must be about twenty years since I encountered a CD burner labelled (wait for it) “Smart and friendly”. Back then the art of burning CDs was sufficiently arcane and difficult that even the hardware manufacturers had to pitch their wares as “smart”.

Let’s send a mission off to meet aliens, run by an experimental AI. I mean, what could POSSIBLY go wrong? “Discovery” from 2001: A Space Odyssey. A picture I made with my Celestia software.

It wasn’t, of course – it was a dumb piece of hardware that relied on third-party software which was about as user-friendly as a starving crocodile. (These days, of course, we say “CD? What’s that?”) Which brings me to the point of the post. Machine smarts. Computers were meant to be intelligent by now – weren’t they? Like us. Or at least R. Daneel Olivaw. Or Orac. Or Hal.

Actually, today’s artificial intelligences… aren’t. What we have today isn’t even remotely close to the way AI has been portrayed in fiction. Oh, we get the illusion of intellect, even the illusion of creativity. But it’s all emulation. And yes, I know that Google’s translate system includes what appears to be a self-generated intermediate language to translate with, but it’s not self-aware.

It’s odd. We were meant to have true artificial general intelligence by now… weren’t we? I mean, it was meant to be an automatic outcome of computing, just as flight was an automatic outcome of cars and the advent of TV would automatically destroy cinemas. Yeah – you get the idea.

The problem wasn’t that something went wrong – it was that we had the wrong idea about what computing technology was capable of in the first place, mostly because we assumed the brain worked the same way a computer does.

Actually, there’s good evidence that our conceptual model of mind, widespread since at least the mid-twentieth century, is dead wrong. It’s easy to suppose that intellect is a product of processing power – that the brain works like a kind of giant computer, and if we slam enough computing power into a small enough space we can generate the computing grunt needed to run artificially intelligent software.

But what say it isn’t? What say consciousness is an emergent property of physical biology – indivisible from it, in the sense that it can’t exist independently of that biology – but more than just a pile of switching systems with instructions on how to configure the switches? That last is what computers are (the instructions, which we call the ‘operating system’ and ‘applications’, along with ‘data’, are stored on the hard drive or solid-state storage – every computer today is a ‘stored program’ computer).
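The ‘stored program’ idea is simple enough to sketch in a few lines: the program is just data sitting in memory, and the processor does nothing but step through it, flipping state as instructed. The opcode names and structure below are invented for illustration – a toy model, not any real machine:

```python
# A toy stored-program machine: the "program" lives in memory as ordinary
# data, and a processor steps through it, changing state as directed.
# All opcode names here are made up for illustration.

def run(memory):
    """Execute a program stored in `memory` as (opcode, operand) pairs."""
    acc = 0   # a single accumulator register
    pc = 0    # program counter: which memory slot we're executing
    while pc < len(memory):
        op, arg = memory[pc]
        if op == "LOAD":     # put a value into the accumulator
            acc = arg
        elif op == "ADD":    # add a value to the accumulator
            acc += arg
        elif op == "JUMP":   # move the program counter elsewhere
            pc = arg
            continue
        elif op == "HALT":   # stop
            break
        pc += 1
    return acc

# The instructions are stored exactly like any other data.
program = [("LOAD", 2), ("ADD", 3), ("HALT", 0)]
print(run(program))  # 5
```

However elaborate the software, this is all that’s underneath: switches, plus stored instructions for configuring the switches – which is precisely the contrast with biology being drawn here.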

But what say consciousness isn’t like this at all? What say it’s a second-order product of a brain-and-body-together system where it’s primarily generated as the outcome of a succession of rapid system/state-changes in the brain? What say that the intelligence displayed by that consciousness is far more than a single ‘number value’ like IQ, and its expression is unique to each species? Humans have human-style (specifically, ‘ape’) intelligence; elephants have ‘elephant’ intelligence, and crows solve logic problems because they (literally) have bird-brains. Computers work in a different way again.

The implication is that we’ll never actually get those intelligent computers after all – still less turn ourselves into them. We may well find a way of inventing a general machine intelligence, sure. Maybe we’ll even develop a self-aware system that re-writes its own software into more complex forms, swiftly generating that ‘singularity’ that’s been so often predicted. I’m dubious – not least because I think the notion of an inevitable ‘singularity’ is based on a philosophical false premise. But it’s possible. What then?

The reality is that it’ll be an intelligence that works in ways utterly different from the way we do. It won’t even have the same underpinnings – it’ll be machine-oriented, not biology-oriented. The old sci-fi trope of self-aware robots being limited by literalism (per the Star Trek TOS episode ‘I, Mudd’) won’t even raise its head. We’ll be lucky to even comprehend what the AI is doing. And would such an intellect be likely to ‘take over’? I think it would be more likely to disregard us as irrelevant.

Any thoughts? Let’s discuss. And if you want to check out my concept for a sci-fi AI, check out my short story ‘Missionary’, available in the first Endless Worlds compilation, on Amazon.

Copyright © Matthew Wright 2017


6 thoughts on “True AI and why it’ll ignore us”

  1. I wish I could remember her name, but a leading researcher in the field of consciousness studies said that, no matter how much detail we might discover concerning the physical operation of our brains and their constituent neurons, we may never know the subjective nature of this thing we call consciousness.

    1. I am certain we won’t, because consciousness is part of the system we are trying to analyse with consciousness. It’s mathematically demonstrable that a system cannot totally analyse itself. Of course, that doesn’t stop the attempt – which I suspect might produce something serendipitous, which we hadn’t anticipated when posing either the problem or the question we thought might solve it.

  2. If we don’t understand the origination of human awareness, how will we be able to originate self-awareness in an “artificial” sense?

    Answering a related question would require that we be able to distinguish a truly self-aware organism from one so well-programmed that we, at some contemporary level of expertise, cannot tell one from the other.

    To what extent is self-awareness a subjective phenomenon not amenable to experimental verification? Is it possible to arrive at a definition of self-awareness such that experimental verification is possible?

    As the above indicates, I think we’re still in the stage of asking questions rather than arriving at answers. So we should keep asking questions!

    1. Yes we should! I understand it’s mathematically provable that it’s impossible to fully analyse a system of which you are a part (something I also think has been shown in softer form through philosophy, if only they’d stop whacking each other with fire-pokers). But that doesn’t stop our sense of inquisitiveness – and even if we can’t ever get a ‘final’ answer on the consciousness issue, I think we can certainly learn a great deal more than we know now. What’s needed is an analytically valid line of enquiry that isn’t imprisoned by the current paradigm. It may well kick up nothing – but ANY appropriately founded answer to an unanswered question, providing we accept its conditions and qualifications, is better than NO answer. Um… probably (see what I did there… :-))

Comments are closed.