Human bias has infiltrated AI, says Vivek Wadhwa.

Enough nonsense. AI isn’t biased. The human world is. AI is just a fast computer that sees the patterns we teach it. It’s bigger and stronger, but basically an Excel spreadsheet filled with data created by Homo sapiens.

And therein lies the problem. Humans are biased beings. Our prejudices are concerning enough when they are limited to individuals. When they are compressed, distilled and disseminated through ultra-powerful computer networks, they become dystopian.

Much of the bias amounts to little more than confirming stereotypes via low-level generalizations: because most builders are men and most secretaries women, all the builders on Google Images’ first page are male, and all the secretaries, female. But you need not look much further to discover something altogether more sinister.

AI learns by encoding patterns from the data it feeds on. If you build an AI system to identify who is at risk of becoming a criminal in the US, the AI can operate only on the data it has. Since black people are incarcerated at a higher rate than white people, most systems will infer that being black makes you more likely to commit a crime.
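To make the mechanics concrete, here is a minimal sketch in Python with entirely fabricated numbers: a “risk model” that does nothing but encode base rates from historical records. No real system is this crude, but the failure mode is the same; the output simply mirrors the skew of the input.

```python
# Toy illustration only -- not any real risk tool. All numbers are invented.
from collections import Counter

# Hypothetical historical records: (group, was_incarcerated).
# The skew below is fabricated to mimic a biased data source.
records = (
    [("A", True)] * 30 + [("A", False)] * 70
    + [("B", True)] * 10 + [("B", False)] * 90
)

def fit_base_rates(records):
    """Learn P(incarcerated | group) by counting -- the crudest possible
    'pattern recognition', and exactly what a skewed dataset rewards."""
    totals, positives = Counter(), Counter()
    for group, incarcerated in records:
        totals[group] += 1
        positives[group] += incarcerated  # a bool counts as 0 or 1
    return {g: positives[g] / totals[g] for g in totals}

print(fit_base_rates(records))  # {'A': 0.3, 'B': 0.1}
# The model has learned nothing about individuals, only the skew in its inputs:
# group A is scored as 'riskier' because the historical records say so.
```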

These unthinking machines lack the wherewithal to account for the deep-set societal biases in the US that lead to black Americans’ relatively higher incarceration rates. Rather than using mathematics and statistics to challenge prejudice, they amplify it. Recently, US senators Ron Wyden and Cory Booker and congresswoman Yvette Clarke introduced the Algorithmic Accountability Act. It is a response to lawmakers’ increasing concern that AI is magnifying human bias in the tools that, more and more, shape our lives.

Yet despite the good intentions, change is going to be hard to come by. AI remains restricted by the data available. Human programmers have yet to crack the puzzle of how to control their own biases.

Let’s return to our criminal-risk example. Imagine we removed racial data completely. Would this solve the problem? Not a chance. Other demographic data, such as income, district of residence, single parenthood and standard of education, correlate so strongly with race in the US that they become proxies for it. Machine learning isn’t at the stage where it can neutralize such linkages between metrics. With their ability to deploy qualitative measures and exercise judgement, professional, educated humans still do a better job of making these assessments than algorithmic tools.
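The same toy model makes the proxy problem visible. In the sketch below, again with invented numbers, the protected attribute never appears in the inputs at all; a correlated feature, here a district code standing in for residential patterns, hands the model almost the same signal.

```python
# Toy proxy-variable sketch (all data fabricated). District 1 is mostly
# group A and district 2 mostly group B, standing in for residential
# segregation. The protected attribute itself has been removed entirely.
from collections import Counter

records = (
    [(1, True)] * 28 + [(1, False)] * 72
    + [(2, True)] * 12 + [(2, False)] * 88
)

def fit_base_rates(records):
    """Count P(incarcerated | district), just as before."""
    totals, positives = Counter(), Counter()
    for district, incarcerated in records:
        totals[district] += 1
        positives[district] += incarcerated
    return {d: positives[d] / totals[d] for d in totals}

print(fit_base_rates(records))
# {1: 0.28, 2: 0.12} -- nearly the same skew as before, recovered through
# the proxy feature even though race was dropped from the inputs.
```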

This is not to say that AI has no role in public policy. In healthcare, its ability to predict risk is a godsend: a tumour is a tumour whether it is in the body of an Asian or an African-American. And the databases such systems use are updatable, if and when interpretations of the data are overhauled.

Yet we risk committing too many sensitive, society-defining decisions to machines, decisions that exert huge influence over people’s lives. When a computer decides whether you get a mortgage, whether your child is admitted to college, or whether a youngster should be separated from his family because of the risk of abuse, we have abandoned our responsibilities as humans. Moreover, we have failed to recognize and act upon our human advantage.

The failures of AI continue. Streaming services such as YouTube use algorithms to boost user engagement by promoting the most engaging content. Yet human judgement is again conspicuous by its absence: any qualitative assessment of whether promoted content is good for society is at best a flimsy afterthought.

The growing recognition that human biases are being exacerbated by systemization has done little to prevent the march of AI. Instead, this convenient, prejudiced sociopath beats on, while human judgement is relegated to the past.

An adapted version of this article appeared on the Dialogue Review website.