It must be about twenty years since I encountered a CD burner labelled (wait for it) “Smart and Friendly”. Back then the art of burning CDs was sufficiently arcane and difficult that even the hardware manufacturers had to pitch their wares as “smart”.
It wasn’t, of course – it was a dumb piece of hardware that relied on third-party software which was about as user-friendly as a starving crocodile. (These days, of course, we say “CD? What’s that?”) Which brings me to the point of the post. Machine smarts. Computers were meant to be intelligent by now – weren’t they? Like us. Or at least R. Daneel Olivaw. Or Orac. Or Hal.
Actually, today’s artificial intelligences… aren’t. What we have today isn’t even remotely close to the way AI has been portrayed in fiction. Oh, we get the illusion of intellect, even the illusion of creativity. But it’s all emulation. And yes, I know that Google’s translate system includes what appears to be a self-generated intermediate language to translate with, but it’s not self-aware.
It’s odd. We were meant to have true artificial general intelligence by now… weren’t we? I mean, it was meant to be an automatic outcome of computing, just as flight was an automatic outcome of cars and the advent of TV would automatically destroy cinemas. Yeah – you get the idea.
The problem wasn’t that something went wrong – it was that we had the wrong idea about what computing technology was capable of in the first place, mostly built around the idea that the brain worked the same way.
Actually, there’s good evidence that our conceptual model of mind, widespread since at least the mid-twentieth century, is dead wrong. It’s easy to suppose that intellect is a product of processing power – that the brain works like a kind of giant computer, and if we slam enough computing power into a small enough space we can generate the computing grunt needed to run artificially intelligent software.
But what say it isn’t? What say consciousness is an emergent property of physical biology – indivisible from it, in the sense that it can’t exist independently of that biology – but more than just a pile of switching systems with instructions on how to configure the switches? That last is what computers are (the instructions, which we call the ‘operating system’ and ‘applications’, along with ‘data’, are stored on the hard drive or solid-state storage – every computer today is a ‘stored program’ computer).
But what say consciousness isn’t like this at all? What say it’s a second-order product of a brain-and-body-together system where it’s primarily generated as the outcome of a succession of rapid system/state-changes in the brain? What say that the intelligence displayed by that consciousness is far more than a single ‘number value’ like IQ, and its expression is unique to each species? Humans have human-style (specifically, ‘ape’) intelligence; elephants have ‘elephant’ intelligence, and crows solve logic problems because they (literally) have bird-brains. Computers work in a different way again.
The implication is that we’ll never actually get those intelligent computers after all – still less turn ourselves into them. We may well find a way of inventing a general machine intelligence, sure. Maybe we’ll even develop a self-aware system that re-writes its own software into more complex forms, swiftly generating that ‘singularity’ that’s been so often predicted. I’m dubious – not least because I think the notion of an inevitable ‘singularity’ rests on a philosophical false premise. But it’s possible. What then?
The reality is that it’ll be an intelligence that works in ways utterly different from the way we do. It won’t even have the same underpinnings – it’ll be machine-oriented, not biology-oriented. The old sci-fi trope of self-aware robots being undone by their own literalism (per the Star Trek TOS episode ‘I, Mudd’) won’t even raise its head. We’ll be lucky to even comprehend what the AI is doing. And would such an intellect be likely to ‘take over’? I think it would be more likely to disregard us as irrelevant.
Any thoughts? Let’s discuss. And if you want to check out my concept for a sci-fi AI, check out my short story ‘Missionary’, available in the first Endless Worlds compilation, on Amazon.
Copyright © Matthew Wright 2017