I have long been intrigued by the way we imagine ‘artificial general intelligence’ as some kind of inevitable future. One, perhaps, where humans – too – can be transferred into machines.
You’ll notice I used ‘general’ in that phrase. We throw plain ‘artificial intelligence’ (AI) around with reckless glee these days, meaning things like chatbots. But they’re one-trick ponies: a true artificial general intelligence, supposedly, would be capable of the same broad range of abilities that we have.
The idea, of course, builds on the supposition that the brain and the computer work in much the same way – that ‘consciousness’ and ‘intelligence’ are products of circuit complexity and scale. This has been central to our ‘model’ of the mind for over a century. It began as a comparison with telephone exchanges, and when computers came along the model was transferred to them. It’s since become axiomatic to assume that the brain works like a computer; or that computers, somehow, can be made in our own image. The ‘rogue AI’ trope in sci-fi is one example of how it’s inveigled its way into popular thinking.
There are many things wrong with that idea. Let’s start with the way AI research was pursued from the mid-1950s, beginning with the 1956 Dartmouth workshop and other get-togethers, and culminating in Marvin Minsky establishing an AI lab at MIT in 1959. Back then the supposition was that it would take a grad student a few weeks over a summer to show how it all worked. Well, that didn’t happen.
The problem was twofold: a disastrous misconception about consciousness, and another disastrous notion of what constituted intelligence.
Let’s start with consciousness. We don’t know what generates it. There is some evidence of conscious awareness being an emergent property – a second-order product of the mind and body together. This, among other things, creates the illusion of ‘spirit’, independent of any physical housing. That isn’t so: consciousness relies absolutely on physicality to exist. But the relationship between ‘consciousness’ and how it’s generated is more complex than switch-flipping (which is how computers work). The body is also turning out to be a single, integrated biochemical system. You know the adage about ‘gut feeling’? Well, it turns out the gut has a huge biochemical influence on our sense of presence and much else.
The other issue is what constitutes ‘intelligence’. The usual western view is anthropocentric – a continuum of general ability running on an ascending scale from a dim awareness among worms up to humans (naturally) at the top. This too is turning out to be untrue: the way we think is particular to our biology, just as the way elephants think is particular to theirs. What’s more, the size of the brain was meant to be an arbiter of smartness – hence the supposition that ‘cavemen’ were ‘stupid’. And hence the surprise when it turned out that tiny-brained crows can solve logic puzzles. So can parrots – evidenced by the way New Zealand kea regularly wreak havoc with tourist property left in cars. And yet the term ‘bird brain’ is a pejorative. Really?
Could it be that birds just think differently?
The concept of ranked ‘single-factor’ intelligence was given cultural power by the ‘IQ’ test – a pernicious form of psychometrics, cooked up by ‘psychologists’ such as Alfred Binet (1857-1911) and since used largely (as far as I can tell) as another way of bashing people into pre-defined boxes. If you couldn’t answer the culturally-framed questions set by the ‘psychologist’, you were stupid. Or dyslexic, because IQ tests didn’t take cognitive differences into account either.
The conceit that these limitations have been overcome in ‘modern’ tests doesn’t remove the false premise on which ‘IQ’ rests in the first place. Intelligence is a multi-factor phenomenon. It cannot be reduced to a single number, particularly one predicated on western conceits. Intelligence is mediated by many factors, including cognition. Are dyslexics stupid? Of course not.
Machines can do particular things far better than humans can. I have a computer that can recall data perfectly and churn it back, including the data required for it to operate. Does this make a computer ‘more intelligent’ than a human?
When that’s mixed with the fact that machines are artefacts – not biological – it becomes clear that (a) we don’t know how consciousness is generated, other than that (b) it’s a product of whatever is physically producing it, though not in the direct sense of circuit-switching; (c) because our consciousness is biochemically based, that ‘emergent’ side is likely to be whole-body, not limited to a single organ (even if one organ is more important in the process); and (d) we don’t have a proper definition of ‘intelligence’, beyond the fact that it is multi-factor and likely to be species-specific. From which we may conclude that biological intelligence will, by its nature, be radically different from machine intelligence in all respects.
All this casts doubt on the notion of a ‘singularity’ in which human consciousness can be transferred into a machine, however much computing power or memory the machine has.
As for the nature of machine intelligence? That’s for another post.
Copyright © Matthew Wright 2017