I have watched AI pronouncements over the last year or so with great interest, like so many others.
It is 30 years since I argued at a robotics and AI conference, much to the horror and anguish of many computer science colleagues, that AI was better interpreted as artificial incompetence than intelligence.
Has anything really changed in that time?
In my 1992 book, Shear Magic: Robots for Shearing Sheep, I argued that so-called intelligent computers were illusions and that the greatest irony of artificial intelligence research is that it demonstrates how shallow our concepts of mind and intelligence really are: “All we have learned is that the thinking we associate with intelligence is the easiest part to replicate with computers.” Yet every human shares perception and thinking abilities that, even now, we have not begun to understand. If you think ChatGPT is intelligent, just ask it to drive your car to the office.
I must acknowledge that the ability of our machines to translate text between languages has advanced. If you are very careful to avoid ambiguities and colloquial expressions in the original text, writing prose that is as boring to read as possible, then translation into many languages is near faultless. Instruction manuals and legal documents translate easily, even with Google Translate. I use DeepL, widely considered better than Google, though it supports fewer languages.
In this post, I want to explain why I think we are not going to see many of the great AI advances so many people have confidently predicted in the last year or two. Not for a while, at least. I have fallen into the same trap myself: I confidently predicted that self-driving cars would be an everyday reality by 2017!
My argument is based on simple economics that I have learned by stumbling into marketing to help the world embrace Coolzy.
Every summer day, my team members scan digital dashboards to assess ROAS, our return on advertising spend. These days we place most of our advertising with Google, through search ads, YouTube, the shopping strip at the top of your search page, and the display ads that appear on so many websites.
In Australia, for example, we aim for a ROAS of about five, meaning that we have to spend $100 on digital advertising to generate $500 of sales revenue. In Pakistan and Indonesia where Coolzy is so much more attractive, we can confidently aim for ROAS of 20 or more, sometimes more than 50. The reason why Google, Meta and the other vast digital platforms are so profitable is that advertising with them really works.
Let’s explore this a little deeper.
Typically, we pay Google or Meta around one dollar every time someone clicks on one of our ads and lands at our website. We pay a tiny fraction of that every time one of our ads is displayed on a screen, somewhere in the world.
Yet only one in a hundred website visitors buys a Coolzy, perhaps two on a good day when a heatwave has been forecast. That’s why every sale has to cover the cost of the other hundred or so ad clicks.
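The arithmetic behind those two paragraphs can be sketched in a few lines. This is only an illustration using the round figures from the post (roughly $1 per click, a 1-in-100 conversion rate, and a sale worth about $500); the function names are mine, not anything Google or Meta provide.

```python
# Illustrative sketch of the ad economics described above.
# All figures are the round numbers quoted in the post, not real accounts.

def cost_per_sale(cost_per_click, conversion_rate):
    """Average advertising cost of one sale:
    clicks needed per sale multiplied by the price of each click."""
    return cost_per_click / conversion_rate

def roas(revenue_per_sale, cost_per_click, conversion_rate):
    """Return on advertising spend: sales revenue per dollar of ad spend."""
    return revenue_per_sale / cost_per_sale(cost_per_click, conversion_rate)

# About $1 per click and a 1-in-100 conversion rate means each sale
# carries roughly $100 of advertising cost.
print(cost_per_sale(1.00, 0.01))   # 100.0

# A sale worth about $500 then yields a ROAS of about 5,
# matching the Australian target mentioned above.
print(roas(500.0, 1.00, 0.01))     # 5.0
```

The point of writing it out is to show how tightly the three numbers are coupled: at a fixed sale price, ROAS moves only when the cost per click or the conversion rate moves.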
If, like me, you enjoy tossing provocative questions at ChatGPT and Gemini, you are benefiting from the money we pay to Google for our ads. It is millions of companies like us, large and small, paying for digital advertising, that have made Google what it is today. Google and Meta live on advertising revenue. And the world only has so much money to spend on advertising.
A few days ago, Google announced their next big step in AI: Gemini. I asked it my usual questions, like “tell me about Pakistani members of the Australian cricket team”. Gemini matched ChatGPT’s response from last year, naming Usman Khawaja and Fawad Ahmed, an improvement on Bard, which completely flunked the answer. In contrast, Bing’s new ChatGPT copilot only listed members of the Pakistan cricket team in its response this morning, a big backward step from ChatGPT last year. Many others have made similar comments.
Google, Meta, OpenAI and so many others are chewing through vast amounts of electricity and investors’ cash, running hundreds of thousands of Nvidia graphics chips to build what are now known as large language models (LLMs): essentially vast networks of mathematical statistics that predict the next few words you are looking for, without any understanding of what those words mean. The models emerge as they scoop up and process trillions of words from websites across the internet. Machine translation abilities rely on huge collections of documents that appear simultaneously in two or more languages. The UN and EU websites are goldmines for translation engines, reflecting the efforts of countless human translators over the last few decades.
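To make the phrase “statistics that predict the next word without understanding” concrete, here is a deliberately tiny toy: a bigram counter over a made-up sentence. Real LLMs use neural networks trained on trillions of words, not a lookup table like this; the sketch only shows the principle that prediction can come from co-occurrence counts alone, with no meaning involved.

```python
# Toy next-word predictor: pure counting, no understanding.
# The corpus is an invented example sentence, not real training data.
from collections import Counter, defaultdict

corpus = "the sheep ate the grass and the sheep slept in the paddock".split()

# Count which word follows which across the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word that most often follows `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))   # "sheep" - it follows "the" most often here
```

The model “knows” nothing about sheep or paddocks; it simply reports the most frequent follower. Scaled up enormously, with far more context than one preceding word, the same statistical idea underlies the fluent text LLMs produce.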
Like many others now, I suspect that this huge expenditure of treasure and energy will disappoint in the end. Vast investments are vanishing like water into sand in the hope that some huge advance in advertising effectiveness will emerge, because only advertising will sustain the successful winner in this race. And something drastic has to change to make the economics add up, because these LLMs are enormously more costly to run than traditional search engines.
So, back to Coolzy.
What would persuade me to pay $10 or even $50 to Google, Microsoft or even Amazon for someone to tap on a Coolzy ad on their smartphone?
I would do that if, and only if, the person who taps the ad is really going to buy a Coolzy. That means Google, Microsoft and Amazon have to predict human behaviour ten times, or fifty times, better than they can now. And I can’t see any sign of that kind of improvement, yet.
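That “ten times or fifty times better” claim follows directly from the earlier arithmetic. A short sketch, again using the post’s illustrative figures (a $500 sale and a target ROAS of 5), shows the conversion rate a platform would need to deliver to justify a higher price per click:

```python
# How good must conversion prediction be to justify a higher cost per click?
# Illustrative figures from the post; the function name is mine.

def required_conversion_rate(cost_per_click, revenue_per_sale, target_roas):
    """From ROAS = revenue_per_sale * conversion_rate / cost_per_click,
    solve for the conversion rate needed to hit the target ROAS."""
    return target_roas * cost_per_click / revenue_per_sale

# At $1 per click, a $500 sale and a ROAS of 5 need a 1% conversion rate,
# which is roughly what happens today.
print(required_conversion_rate(1.0, 500.0, 5))    # 0.01

# At $50 per click, the same economics demand a 50% conversion rate:
# the platform must be about fifty times better at predicting buyers.
print(required_conversion_rate(50.0, 500.0, 5))   # 0.5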
Recently I came across perplexity.ai, and it initially impressed me. I asked questions about Coolzy, and its responses were so good that I am tempted to recommend it to our website visitors when our primitive chatbot can’t answer their questions. Perplexity have announced a copilot that engages users in conversation to help narrow the focus of their ‘knowledge’ search. I thought to myself: aha, this might be a search engine that really can find someone ready to buy a Coolzy. If it worked for us, I would pay far more than I pay Google for website visitors. The Perplexity business model just might eclipse Google, or so I thought.
I put Perplexity’s copilot through an extended test. I pretended to be someone looking for a low-power aircon that works in tropical humid heat, with no knowledge of Coolzy. Sadly, the copilot failed: I came away more confused and frustrated than helped. Despite telling me that evaporative aircons don’t work in high humidity, it kept recommending tiny USB-powered so-called air conditioners like Evapolar that rely on water evaporation and are little more than toys. They only work in low humidity, and even then produce only a tiny cooling effect.
Thinking about this, I realized that Perplexity has perplexed itself because it cannot distinguish truth from faction: the vast quantity of facile text created by marketers to steer search engines towards misleading websites, built upon confused ideas about air conditioning that even engineers cling to. Hence, artificial incompetence.
For a decade or more, commercial and respectable website builders alike have chased Google search rankings, which depend on vast amounts of text mentioning something that might be relevant to a potential visitor; the text need not be factual or even logical. Now, one of the main applications touted for ChatGPT and Gemini is producing faction even faster to attract search engines, increasing the pile of meaningless internet content exponentially.
I am helping to build a website about the history of engineering in Western Australia. Despite the large quantity of carefully researched text there, Google ignores it because it thinks the site is unlikely to attract a paying customer. Perplexity knows about it, but not Google.
LLMs, therefore, seem to have become imprisoned by the marketing industry, which has created vast quantities of meaningless text to boost websites’ Google search rankings. LLMs are not much good at generating anything logical anyway: they regurgitate a digested form of the garbage that makes up so much of the internet today. In universities, we struggle to help our students find the small quantity of reliable information out there.
Even academic publishing has become an ever-growing archive of papers, most of vanishing significance, that few people ever read apart from the authors. Academics are rewarded for publishing papers, not reading them.
LLMs will need to be able to distinguish truth and logic from faction if they are to provide anything reliably helpful. And that will take a long time, I suspect.
I often wonder whether the AI hype has all been a ploy by the IT industry to seduce investors once more. The industry has monopolized the venture capital supply for two decades without creating appreciable productivity gains. Meanwhile, the transition to renewables and electric vehicles is taking an ever-increasing share of investment capital, casting shade over Silicon Valley’s cathedrals.
Roger Penrose argued that biological intelligence relies on quantum effects (see his book The Emperor’s New Mind). He inspired physicists to work on quantum computing, until recently flagged as the next great step in AI. However, I suspect that practical applications are still decades away.
Is AI really taking human civilization to the next inflection point?
Yawn.
It’s time to talk about Coolzy. No LLM will keep billions of people cool in the coming century.
Image by Anca Gabriela at unsplash.com
PS: WordPress now offers to generate a summary of my post (presumably using AI). I tried it and immediately discarded the result, which was so much more mind-numbingly boring than my own writing. If that’s the future of writing, the internet will become humanity’s greatest garbage dump even faster! Bring back books, please.