Don’t expect chatbots to learn from their mistakes.

It was the political consultant Kellyanne Conway who coined the phrase ‘alternative facts’. Her response to press questions about the attendance at Donald Trump’s inauguration was probably not premeditated, yet it came to define what became known as a ‘post-truth era’: a period of several years in which falsehoods were presented as truth. Some say that era is over. They are wrong. Rather than abolishing ‘alternative facts’, we have mechanized them. Today’s most prolific liars are more likely to be algorithms than people.

Asking ChatGPT to profile oneself is the pastime du jour. The results differ. Some are comic. Others are tragic. Where they vary little is in their accuracy: most are rife with factual errors. The concern is that too many people are willing to believe them. My wife Tavinder died of cancer four years ago. My sons, Vineet and Tarun, and I remain devastated by our loss. My biography is trivially searchable. Yet ChatGPT ignored it. Instead, across separate prompts, it invented two alternative wives for me.

There are few crueler biographical errors than erasing the memory of a real person and replacing her with a work of fiction. Yet such despicable inaccuracies are far from rare: chatbots make crushing mistakes frequently. Their blunders are airily brushed off by their developers as ‘works in progress’. That might be the most fundamental falsehood of all.

Even the bot itself can be more honest about its flaws than its developers, who persist in the mantra that it will improve over time. Tech watchers are wise to be skeptical. When I asked ChatGPT to explore “its flaws in the style of Vivek Wadhwa”, this was one of its many admissions: “An essential aspect of human intelligence is the ability to recognize and rectify mistakes. Unfortunately, ChatGPT lacks the capacity for self-correction, leading to a lack of accountability. When faced with inaccurate or misleading responses, the model fails to acknowledge its errors or take corrective actions. This can perpetuate misinformation and hinder the learning and improvement processes that are vital for AI systems.”

At least it got that right. Whatever our flaws, humans are generally competent at self-correction. Parents have told children since time immemorial that it is okay to make mistakes if you learn from them. One’s mistakes, to a great degree, maketh the man. The deep flaw with machine learning is that it is designed to mimic the way the human brain’s neural network functions – but does so in a profoundly limited and imperfect way.

Deep learning systems have billions of parameters, which their own developers can identify only by their locations within a complex neural network. They are the ultimate ‘black box’: once a network is trained and unleashed, even its designers struggle to analyze its outputs or explain how it does what it does, which makes it difficult to reverse engineer. Frankenstein is an old fable with modern application.
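To see why such systems resist inspection, consider a toy illustration: a minimal sketch, assuming only NumPy, in which a tiny two-layer network learns the XOR function. The network, task, and training loop here are illustrative assumptions, not anything drawn from ChatGPT itself. The finished model behaves correctly, yet what it has ‘learned’ is nothing but arrays of numbers, identifiable only by their position within the network; the weights themselves explain nothing about why the model answers as it does.

```python
# A minimal sketch (assuming NumPy is available) of why trained networks are opaque:
# a tiny two-layer network learns XOR, yet its learned parameters are just
# unlabeled numbers, addressable only by layer and position.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters are identified purely by where they sit in the network.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                  # plain gradient descent
    h = sigmoid(X @ W1 + b1)           # hidden layer
    p = sigmoid(h @ W2 + b2)           # output layer
    grad_p = (p - y) * p * (1 - p)     # backpropagate the squared error
    grad_h = grad_p @ W2.T * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_p; b2 -= 0.5 * grad_p.sum(0)
    W1 -= 0.5 * X.T @ grad_h; b1 -= 0.5 * grad_h.sum(0)

print(np.round(p.ravel(), 2))   # should approach [0, 1, 1, 0]: the network 'works'
print(np.round(W1, 2))          # ...but the weights, read directly, explain nothing
```

Scale those few dozen numbers up to hundreds of billions and the problem only deepens: the behaviour is there, but the explanation is not.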

ChatGPT is already at large. Some businesses that began by ‘experimenting’ with ChatGPT now use it, often unchecked, to save on labor costs. Yet it is a technology without regulatory guardrails, in critical need of human chaperones. The psychologist and AI researcher Gary Marcus notes that if you ask a chatbot to explain why crushed porcelain is good in breast milk, it might tell you that “it can provide the infant with the nutrients they need to help grow and develop”. Such ridiculous myths would be amusing were humans guaranteed to disbelieve them. Yet as the ‘post-truth era’ revealed, people are prone to accepting deceits if they are presented with sufficient confidence.

ChatGPT delivers, in fluent and convincing prose, truths riddled with lies. Many people will enjoy mocking its output. Yet the evidence suggests that all too many more will unquestioningly believe it.