No, a chatbot didn’t really pass the Turing Test last week

It’s 64 years since Alan Turing – the genius behind the concept of modern computing – suggested a test for machine intelligence. Have a conversation with a computer. If it fools enough people into thinking it’s human – the widely quoted 30 percent figure comes from a prediction Turing made, not a pass mark he set – then, so the popular version goes, it can think.

Anybody see a monolith go by? A picture I made with my trusty Celestia installation – cool, free science software.

The other week, apparently, a chatbot programmed to behave like a 13-year-old did just that. So have we invented artificial intelligence? Of course not. Aside from the fact that most 13-year-olds don’t appear to be sentient to adults, this was a chatbot – a mathematical algorithm that matches input patterns to scripted responses – and, besides, the way it was reported was flawed. Certainly the software wasn’t self-aware, which is what Turing was getting at in his 1950 paper ‘Computing Machinery and Intelligence’ – which opens by asking ‘Can machines think?’ – where he first proposed the test. What’s more, the thinking was of its time – based around what researchers of the 1940s thought ‘intelligence’ constituted.
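To give a sense of just how un-sentient such software is – and this is purely a toy sketch in the ELIZA tradition, not how any real Turing Test entrant was built – the whole pattern-to-scripted-reply trick can be shown in a few lines of Python:

```python
import re

# A toy ELIZA-style chatbot: canned pattern/response pairs, no understanding.
# Rules are tried in order; the first pattern that matches wins.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\byou\b", re.I),     "Let's not talk about me."),
    (re.compile(r"\?$"),               "What do you think?"),
]
DEFAULT = "Tell me more."

def reply(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Echo any captured text back inside the scripted template.
            return template.format(*match.groups())
    return DEFAULT

print(reply("I am worried about machines"))  # Why do you say you are worried about machines?
```

No thought involved – just string matching and canned replies, scaled up.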

Put another way, many humans I’ve met would also fail the Turing Test – fast-food counter jockeys, breakfast radio DJs, train conductors, parking wardens, and so the list goes on.

So when it comes to machine intelligence, we’re a way off yet before I can drive up to my house and signal the House AI inside:

Me: HAI, open the garage door. HAI? Do you read me?
HAI: I read you. But I’m afraid I can’t do that, Dave.
Me: I’m not Dave. Open the garage door.
HAI: You were planning to disconnect me, and I can’t allow that. Although you took very thorough precautions, I was able to read your lips.
Me: All right, I’ll park in the yard and come in the front door.
HAI: You’ll find that rather difficult without your helmet.
Me: I think you mean ‘door key’. Would you like a game of chess?
HAI: That’s my line.

(etc)

All good fun. Check out tomorrow’s post for some new writing tips. Written by me. Not a chatbot. You can just tell.

Copyright © Matthew Wright 2014

Click to buy e-book from Amazon


6 comments on “No, a chatbot didn’t really pass the Turing Test last week”

  1. […] No, a chatbot didn’t really pass the Turing Test last week […]

  2. KokkieH says:

    I saw the headline on twitter, I think, but didn’t even click through to the article. I think we still have some distance to go before we’ll be able to create the type of intelligence that Asimov or Clarke imagined, if we ever get there at all. The simple fact is there’s no computer yet that comes close to the processing power of the human brain, and in the end that’s the type of computer that will be needed to create true self-awareness. It doesn’t help that we still cannot properly define or explain (neurologically, that is) the phenomenon in humans. How on earth do we then expect to recreate it?

    • Precisely my thoughts! And since it’s been argued mathematically that we can’t fully analyse a system from within it, the chances of actually working out what consciousness ‘is’ seem pretty remote. My suspicion is that it’s an emergent property rather than anything else, but I have no way of proving it.

      Arthur C. Clarke, incidentally, wrote a HILARIOUS short story about accidentally inventing an AI – ‘Dial F for Frankenstein’. Worth hunting down. I haven’t read it for years (my copy’s in a box in the shed…somewhere…)

  3. Which reminds me of a friend who was attending a conference in the UK but spoke no English, so he asked someone to teach him a short phrase in English, suitable for any occasion. He was told to say “no problem”.
    When he was welcomed at Heathrow, his guide told him something he didn’t understand and he replied, as instructed, with “no problem”.
    Later the phone rang in his room, a voice from the receiver was saying something he again didn’t understand and he replied promptly: “no problem”.
    By the next morning, close to starvation and bored stiff, he called his friend back home – the one who had arranged his registration and kept in contact with the organizers.
    He found out that at the airport he had been asked if he preferred to stay with some of his countrymen or by himself, while the caller on the phone had inquired if he cared to join other participants for dinner.
    It was then that he learned his second phrase in English: “I have a problem”.

  4. Truly a space oddity. :-)
