The Great Artificial Intelligence Scam (Again)

{A longer version of this post appeared in the Australian Financial Review on August 18th under the title “When robots learn to lie, then we worry about AI”.}

Great claims are being made these days for artificial intelligence, or AI.

Amazon’s Alexa, Google’s Assistant, Apple’s Siri: these are all claimed as examples of AI.  Yet speech recognition is hardly new: we have seen steady improvements in software like Dragon for 20 years.

We have seen claims that AI with new breakthroughs like ‘deep learning’ could displace 2 million or more Australian workers from their jobs by 2030.

I was fortunate to discuss artificial intelligence with a philosopher, Julius Kovesi, in the 1970s as I led the team that eventually developed sheep shearing robots.  With great insight, he argued that robots, in essence, were built on similar principles to common toilet cisterns and were nothing more than simple automatons.  “Show me a robot that deliberately tells you a lie to manipulate your behaviour, and then I will accept you have artificial intelligence!” he exclaimed.

That’s the last thing we wanted in a sheep shearing robot, of course.

To understand the future prospects for artificial intelligence, learn to see it as just another way of programming digital computers.  That’s all it is, for the time being.

We have been living with computers for many decades.  Gradually, we are all becoming more dependent on computers and they are getting easier to use.  Smart phones are a good example.  They change the way we live and work and can also disrupt sleep and social lives if we let them, but so can many other things too.

Therefore, claims that we are now at “a convergence” where AI is going to fundamentally change everything are hard to accept.

We have seen periodic surges in AI hyperbole. In the 1960s, machine translation of natural language was “just two or three years away”.  We still have a long way to go with that one.  In the late 1970s and early 1980s, many believed forecasts that 95% of factory jobs would be eliminated by the mid-1990s.  We still have a long way to go with that one too.  The “dot gone” bust of 2001 ended another surge in claims.  Each time, disappointment has followed as the claims faded in the light of reality.  And it will happen again.

Self-driving cars will soon be a reality, thanks to painstaking advances in sensor technology, computer hardware and software engineering.  You can call it AI if you like, but it does not change anything fundamental.

The real casualty in all this hysteria is our appreciation of human intelligences… plural. So far, artificial intelligence has replicated only performances we associate with intelligent people: masterful game playing, mathematical theorem proving, even legal and medical deduction.  Now consider performances easily mastered by people labelled as the least intelligent, like figuring out what is and is not safe to sit on, or telling jokes.  Cognitive scientists are still struggling to comprehend how we could even begin to replicate these.

Even animal intelligence defies us, as we discovered when MIT scientists perfected an artificial dog’s nose sensitive enough to detect TNT vapour from buried landmines.  When tested in the field, this device detected TNT everywhere, yet dogs could locate the mines in a matter of minutes.

It might be worth asking whether the current surge of interest in AI being promoted by companies like Google and Facebook is a deliberate attempt to seduce investors, or simply another instance of self-deceiving groupthink.


  1. Refreshing for once to see someone make this argument. AI is hyped beyond reason, and the more I look into it, the clearer that becomes.

    Do a simple search on “Elon Musk predicts” and you’ll be flooded with news articles from otherwise respectable outlets citing him for every other prediction about AI, WWIII and the end of human jobs without challenging him on those assumptions.

    Tesla, Facebook and all the other AI-driven companies (which means most tech companies) have an inherent interest in exaggerating the capabilities of AI.

    I spoke with a couple of lesser-known AI researchers and entrepreneurs who shake their heads at these predictions. AI has progressed a lot in natural language processing (recognising symbol patterns), but no one knows how to crack the nut of natural language understanding (semantics and the meaning of symbols), which you need if you want to make human-level AI.

    So you get these almost-okay machine translations that somehow still don’t “get it” the way humans do. They’re simply missing common sense.


    As far as I know, artificial neural networks were already built in the 1950s. What has happened since then is a refinement of existing methods, plus exponentially increasing computing power and a flood of data. Nothing fundamentally “new”, however.
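    To make the commenter’s point concrete: the perceptron learning rule, dating to Rosenblatt’s work in the late 1950s, is a minimal sketch of the building block that modern deep learning scales up with more layers, data and compute. (The code below is illustrative, not from the original post.)

    ```python
    # Minimal perceptron (1950s-era method): learn weights for a
    # linearly separable function such as logical AND.
    def train_perceptron(samples, epochs=20, lr=0.1):
        w = [0.0, 0.0]
        b = 0.0
        for _ in range(epochs):
            for (x1, x2), target in samples:
                # Threshold unit: fire if weighted sum exceeds zero
                pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = target - pred
                # Rosenblatt's update: nudge weights toward the target
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
        return w, b

    # Train on the truth table of AND
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b = train_perceptron(data)
    predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    ```

    Everything since, from backpropagation to today’s deep networks, refines this kind of weighted-sum-and-threshold unit rather than replacing it with something fundamentally different.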

    It’s my understanding that we need something completely different in order to make that quantum leap from narrow AI to general AI. Probably we won’t even be able to make that leap. What do you think?

    /Tech journalist, author of upcoming book about AI and the future of human skills –


    • Thanks for the comment – to me the obvious question is why would we need that “general AI”. The more you think about language translation, the more you realise that just translating the words is never enough.

