Tesla’s supremo should put his money where his mouth is.
Elon Musk has the ear of the world. Never shy with a prediction, Musk is a soothsayer. He’s the man who can forecast – and make – the future. Or is he? For all his wonderful technology successes, the Tesla boss has a habit of being wrong. Much of his bluster crumbles in the face of reality. The assumption is that Musk says it, and so it shall come to pass. But, actually, it often doesn’t.
The self-driving taxis he promised would be ferrying us all to restaurants have yet to reach their first course. This spring, a summoned ‘self-driving’ Tesla crashed into a $3 million private jet parked at an almost empty airport.
The quantum leap in artificial intelligence promised by Musk simply hasn’t materialized. And this matters – because Musk himself has scared the world with his chilling claim that “with artificial intelligence, we are summoning the demon”.
There is a chasm between the AI we have – which is chiefly pattern recognition – and the kind of Star Trek-style AI that remains the stuff of Musk’s fantasies. Instead, today’s AIs repeat crass stereotypes, propagate misinformation, and fail at even basic human tasks – like driving around deserted aerodromes.
Building an AI that we can trust with our lives is science’s greatest challenge. With his Panglossian assessments, Musk is being deeply unhelpful. He is misleading the world about the sheer scale of the task ahead. That’s why I have asked him to put his money where his mouth is. When my colleague Gary Marcus drafted a $100,000 bet that AI would fail to pass a series of tests by 2029, I matched it, increasing the stakes to $200,000.
Since then, several more tech experts have thrown their hats into the ring. At the time of writing, the even-money bet on offer to Musk stands at $500,000. This is pocket change to the multibillionaire. Yet he has so far declined to accept our wager. What’s stopping him? Perhaps it is the nature of the tests, which examine intelligence in the round rather than narrow skills such as playing Go or chess. For Musk to win the bet, an AI must master at least three of these five:
- Watch films and tell us accurately what is going on. Who are the characters? What are their conflicts and motivations?
- Read novels and reliably answer questions about plot, character, conflicts and motivations. Go beyond the literal text and show an understanding of the material.
- Work as a competent cook in any given kitchen.
- Reliably construct bug-free code of more than 10,000 lines from a natural-language specification, or through interactions with a non-expert user.
- Take arbitrary proofs from the mathematical literature, written in natural language, and convert them into a symbolic form suitable for symbolic verification.
What is striking about the tests is that they are all unremarkable in human terms. Most Dialogue readers could manage at least three without thinking too hard. Yet can today’s AIs pass Apple co-founder Steve Wozniak’s famous test: could they visit your house and make a cup of coffee? Maybe Musk thinks they can. Maybe he can program a robot to grind the beans, boil the water, and fill the cups.
If he is so confident that robots can perform such mundane tasks, he should take our bet. It’s easy money, and it will help pay for the repairs to that private jet.