Inclusivity must reach beyond flesh and blood
Bionic bigotry harms us all, writes Michael Chavez.
Are you prejudiced towards your own race? Some 99% of Dialogue readers would answer ‘no’ to that question. They might resent even being asked. Yet the true answer is ‘yes’. Human beings – almost all of us – are biased towards the human race.
How do we view robots? Most of us are prejudiced against them. A report from the Centre for the Governance of AI suggests that 82% of Americans are concerned about where AI will lead. EU surveys reveal similar misgivings. As the Fourth Industrial Revolution unfolds – with its promises of unfathomable computational capability – business and social observers are highlighting some very old, human fears, such as loss of jobs and status, a widening of the wealth gap and social dislocation.
Yet, like all prejudices, the discomfort is irrational. I recently met the Great Mind that is Dr. David Hanson, founder of Hanson Robotics Limited. Our conversations persuaded me that not only can robots make us more human, but they also have the potential to make us more purpose-oriented and creative. “AI and robotics are a way to enhance the quality of life for a more diverse group of people,” said Hanson, who offers a unique perspective as one of the creators of Sophia the Robot, the world’s most humanlike android. “We want to inspire people to create a human bridge to our AI so we can all evolve.”
How might this work? As robots take over rote, ‘algorithmic’ tasks, humans must increasingly specialize in capabilities that are ‘robot-proof’ – such as creativity, purpose, imagination and curiosity-based problem exploration. Yet as we focus on these uniquely human capabilities and traits, we will do so through ever-greater levels of interaction with robots and AI.
Yet the evidence thus far is that humans don’t much like being around robots. We can live happily enough with Siri and Alexa, because they are sympathetic voices that emanate from innocuous black boxes. But talking to Sophia is a quite different experience: interacting first-hand with her is thrilling, but jarring. A humanoid robot infringes upon our identities precisely because of its humanlike look and actions. Yet, according to Hanson, this is exactly the point: we as humans must learn to interact with humanoid characters as a new form of inclusion.
Robots challenge our built-in biases. Through inventions like Sophia, Hanson is bringing these prejudices to the surface, flushing them out. This catharsis is critical to any organization that will use robots in the coming years. The more robots begin to resemble humans, the more we have to consider them in our inclusive thinking. Although we cannot – at least yet – harm robots by excluding them, Hanson asserts that we are harming ourselves by doing so. Limiting our interactions with AI restricts our human capabilities.
We can counteract our modern-day Luddism with better tools and policies. We already have leadership practices that push us to build team collaboration with those who are different from us, be they man, woman or machine. Through his work, Hanson is forcing us to think about how we might incorporate robots into our companies.
If this sounds like a work of science fiction, it soon won’t. Great Minds like Hanson are forcing us to start taking inclusivity for robots seriously. If humans continue to set themselves against machines, we will limit our own power. And today’s backlash against AI will seem like a mere flesh wound compared to what’s to come.
— Michael Chavez is global managing director of Duke Corporate Education. A version of this column originally appeared on Forbes.com.