Why the singularity won’t happen any time soon. Or at all.

I have long been intrigued by the way we imagine ‘artificial general intelligence’ as some kind of inevitable future. One, perhaps, where humans – too – can be transferred into machines.

Charles Darwin before thinking about intelligence (detail from watercolour by G. Richmond, public domain, via Wikipedia).

You’ll notice I used ‘general’ in that phrase. We throw plain ‘artificial intelligence’ (AI) around with reckless glee these days, meaning things like chatbots. But they’re one-trick ponies: a true artificial intelligence, allegedly, is capable of the same general abilities that we have.

The idea, of course, builds on the supposition that the brain and the computer work in much the same way – that ‘consciousness’ and ‘intelligence’ are products of circuit complexity and scale. This has been central to the ‘model’ of mind for over a century. It began as a comparison with telephone exchanges, and when computers came along the model was transferred to them. It’s since become axiomatic to assume that the brain works like a computer; or that computers, somehow, can be made in our own image. The ‘rogue AI’ trope in sci-fi is one example of how it’s inveigled its way into popular thinking.

There are many things wrong with that idea. Let’s start with the way AI research was pursued from the mid-1950s – at the 1956 Dartmouth workshop and other get-togethers – culminating in Marvin Minsky co-founding an AI lab at MIT in 1959. Back then the supposition was that it would take a grad student a few weeks over a summer to show how it worked. Well, that didn’t happen.

The problem was twofold: a disastrous misconception about consciousness, and another disastrous notion of what constituted intelligence.

Let’s start with consciousness. We don’t know what generates it. There is some evidence of conscious awareness being an emergent property – a second-order product of the mind and body together. This, among other things, creates the illusion of ‘spirit’, independent of any physical housing. That isn’t so: consciousness relies absolutely on physicality to exist. But the relationship between ‘consciousness’ and how it’s generated is more complex than switch-flipping (which is how computers work). The body, too, is turning out to be an integrated biochemical system. You know the adage about ‘gut feeling’? Well, it turns out the gut has a huge biochemical influence on our sense of presence and much else.

Charles Darwin after thinking about intelligence. Public domain, via Wikimedia Commons.

The other issue is what constitutes ‘intelligence’. The usual western view is anthropocentric – a continuum of general ability from a dim awareness among worms, running on an ascending scale to humans (naturally) at the top. This too is turning out to be untrue: the way we think is particular to our biology, just as the way elephants think is particular to theirs. What’s more, the size of the brain was meant to be an arbiter of smartness – hence the supposition that ‘cavemen’ were ‘stupid’. And hence the surprise when it turned out that tiny-brained crows can solve logic puzzles. So can parrots – evidenced by the way New Zealand kea regularly wreak havoc with tourist property left in cars. And yet the term ‘bird brain’ is a pejorative. Really?

Could it be that birds just think differently?

The concept of ranked ‘single-factor’ intelligence was given cultural power by the ‘IQ’ test – a pernicious form of psychometrics, cooked up by ‘psychologists’ such as Alfred Binet (1857-1911) and since used largely (as far as I can tell) as another way of bashing people into pre-defined boxes. If you couldn’t answer the culturally-framed questions set by the ‘psychologist’, you were stupid. Or dyslexic, because IQ tests didn’t take cognitive differences into account either.

New Zealand Kea (Nestor notabilis) – one very smart parrot. Public domain, photo by Alan Liefting.

The conceit that these limitations have been overcome in ‘modern’ tests doesn’t remove the false premise on which ‘IQ’ rests in the first place. Intelligence is a multi-factor phenomenon. It cannot be reduced to a number, particularly one predicated on western conceits. Intelligence is mediated by many factors, including cognition. Are dyslexics stupid? Of course not.

Machines can do particular things far better than humans can. I have a computer that can recall data perfectly and churn it back, including the data required for it to operate. Does this make a computer ‘more intelligent’ than a human?

When that’s mixed with the fact that machines are artefacts – not biological – it becomes clear that (a) we don’t know how consciousness is generated; other than (b) it’s a product of whatever is physically producing it, but not in the direct sense of circuit-switching; (c) because our consciousness is biochemically based, that ‘emergent’ side is likely to be whole-body, not limited to a single organ (even if one organ is more important in the process); and (d) we don’t have a proper definition of ‘intelligence’ – other than that it is multi-factor and likely to be species-specific. From which we may conclude that (e), on that basis, biological intelligence will by nature be radically different from machine intelligence in all respects.

All this casts doubt on the notion of a ‘singularity’ in which human consciousness can be transferred into a machine, however much computing power or memory the machine has.

As for the nature of machine intelligence? That’s for another post.


Copyright © Matthew Wright 2017


6 thoughts on “Why the singularity won’t happen any time soon. Or at all.”

  1. Brilliant article! I wish you’d posted this a few weeks back when I was having an argument with [some] sci-fi writers about the possibility of pouring a human brain into a machine and thus creating a form of ‘immortality’. I know it’s a common, and popular, trope, but I hate it in much the same way I hate the concept of FTL. Just because we can imagine something doesn’t mean it exists or might exist in a dim and distant future. I can imagine fairies at the bottom of my garden but it doesn’t mean they’re there. 😦

    1. Thanks – sounds like we’re on the same page when it comes to the practical realities of AI. I really think science has got the wrong ‘model’ for how the mind works – and has had since forever. There are dissenters from it – but I’ve also read reports from advocates, and I can’t help thinking that while the notion of ‘brain as super-computer’ is mainstream, we won’t actually make much progress inventing a real AI.

      1. A few weeks back I read an interesting article about how we, as in humanity, always express our concepts of the brain/mind in terms of current technology. As the computational model has been around for a while, it’s inevitable that we use it to ‘visualise’ our knowledge of ourselves. Plus there’s the old saying about how we can only ever invent things ‘in our own image’.
        But, the more we learn about how the brain really works, the less appropriate the computational model becomes. As you pointed out, our thinking and feeling owes as much to the physical as it does to logic. More so, in most cases.
        For me though, the greatest point of difference is that at the most mechanistic level, our brains require electrical and chemical processes to work in tandem – e.g. the synapse.
        To lapse into my own version of cultural thinking, it’s like the difference between digital and analogue sound, or jpegs vs bmps. We will always be ‘analogue’. 🙂

  2. You can see AI in video games, to some extent, and it’s hardly promising at the best of times, but one day I do hope a robot will be able to cook my dinner.

    1. I have one cooking my dinner now – well, I had to fill the crock pot up myself and turn it on… but it has an ‘auto’ setting which until a few minutes ago, as I write this, held every promise of delivering me a perfectly cooked dinner. Lesson of the day: never trust the ‘auto’ setting and check crock pots more often.

  3. A couple of points:

    Once upon a time it was believed that taking a daily bath would cause one to become ill.

    And that traveling faster than a horse could run would make it impossible to breathe.

    Of course both of the above concepts were proven to be false so I wouldn’t completely rule out the possibility of AI just yet.

    On the other hand, IQ tests can only measure the knowledge an individual has accumulated; while intelligence is the ability to process and make use of that knowledge, as well as make leaps of logic, which can only be observed and not measured.
