Artificial intelligence continues to develop, but humans are still thinking about it in outmoded ways.
In March 2018, Duke Corporate Education hosted a lively conference on the role of artificial intelligence, or AI, in the future of work. At one point during a presentation, the host, BBC anchor Samantha Simmonds, remarked: “[AI] is exciting and scary, but we’ll be OK as long as robots can’t climb stairs.” The joke neatly captures many of our anxieties about the coming age of artificial intelligence, and the role that humans will play in it.
Clunky and laughable
At one level, we can watch science fiction movies predicting that AI will end the world as we know it (think domination and overthrow). Yet at another level, the reality is that some AI developments seem mostly clunky and somewhat laughable. A good example was the cheery little Sambot touring the room, which opened the proceedings by welcoming guests. Its facial recognition system classified most female participants as male, and added about 30 years to their real ages. See what I mean? Hilarious. What support, or threat, can AI possibly be?
A slow start
It’s true that AI got off to a slow start, but the pace is accelerating. We can see how automation of customer service processes or routine processing work could reduce costs and increase unemployment across a range of industries, just as it already has in car manufacturing. Some forecasts suggest that 50% of jobs could be replaced in the next ten years. The powerful conference opening session reinforced this gloomy view. Actors representing a claims adjuster (who has already lost his job to automation), a young B-tech graduate (who worries that his skills are already outdated) and a chief executive (who has used automated processes and systems to reduce costs in her medical instrument manufacturing company) posed some very real questions about the future of work when AI, as we all hope, takes over the drudgery.
Ethics
However, they also touched on one of the bigger issues: ethics. If AI is only as ethical as the programmer, how do we ensure that AI – as it becomes increasingly able to learn for itself – is a force for good, not evil? How do we harness it to increase the wellbeing of humans – rather than annihilate us?
Liquid leadership
Nick Jankel’s keynote speech reminded the audience how much has already changed, even at this early stage in AI’s development. In 1937, a company could expect a 75-year lifecycle; by 2018, that had fallen to 14 years. By 2027, 75% of the Fortune 500 will be new names and brands. The pace of change is breathtaking and shows no signs of slowing down.

New business models abound, and ownership of resources no longer guarantees a successful business. Uber is the largest taxi company in the world, yet owns no taxis; ASOS operates a similar model in fashion retail. WeWork, the US company that provides shared workspaces and services, was founded in 2010 and is already valued at $20 billion. Tesla manufactures solar roof tiles that are not only visually attractive, but also cheaper than conventional tiles. Phenomenal wealth is being created by entrepreneurs who grasp the new realities, from leveraging new technology to saving our planet. Of course, the ‘Fourth Industrial Revolution’ comprises many breathtaking new technologies and developments, such as Big Data, 3D printing (a house was recently printed in under 24 hours), nanotechnology, blockchain and cryptocurrencies; but robotics, machine learning and AI all play their part. Better-informed customers demand more, better and different: personalized service.
Yet, says Jankel, so far all this change is bringing more stress than reward, especially for the Millennials who will inherit this future. He reports that Millennials are stressed by climate change, resource scarcity, emergent diseases, terror campaigns and political instability – to the extent that 50% report being kept awake at night by worry. And the danger with fear is that it immobilizes us, keeps us in old patterns of thinking and behaving at just the moment when we need to be at our most creative, to grasp new opportunities.
Jankel recommends three things. The first is to recognize the business signs of industrial-age thinking: where products are becoming commodities, it takes a huge effort to reap a marginal increase in return on investment; the business is churning rather than changing; and customer complaints are on the increase. These ailments signal the need to think differently, to stop working harder and to start thinking smarter.

The second is to build a workplace where people feel safe to be themselves, where they don’t feel as if they might be replaced by a cheaper robot at any moment, but where they have permission to reflect and be creative, helping to co-create a better future for the business. Rather than getting stuck working harder at an outmoded business model, they will have the creativity to find new and more productive ways forward. This is the thesis for building and retaining the human difference: leveraging what humans do best. Being clear about the purpose of the organization (why it is in business, not just what it provides) can also help employees feel more connected to it.
And third, Jankel recommends combined thinking, rather than looking at polar opposites. Don’t believe that you must drop everything and adopt the Fourth Industrial Revolution wholesale. While some new-business-model companies have done just that, there is still space for many third-industrial-age business ideas. For example, there is a role for both empowerment and leadership; for top-down and bottom-up thinking; for open-plan offices that ease communication, and a separate office where the leader has time to reflect. Making opposites work together and solving dilemmas are creative acts, and further reinforce the human difference.
AI, still in its infancy, can already retain more of what it reads, process data faster, and diagnose illness earlier and more accurately; it is a powerful research engine. AI can handle and crunch data and draw logical conclusions. It cannot daydream, think inductively or problem-solve. Jankel’s view is that humans need to enhance these vital capabilities, rather than becoming so stressed that we lose our distinctive human difference. And organizations need to foster transparency around data, and the trust to allow people to self-organize and problem-solve together, leveraging all the diversity at their disposal.
AI immersion – back to ethics
Conference participants visited three rooms to discuss the following ideas in more depth: building an organization led by purpose; which new jobs might emerge to replace existing jobs in a future AI world; and AI itself. It quickly became apparent that those working in the AI industry see the gap between humans and AI vanishing over the next 15 years. In their view, the human difference is not sustainable.
Memory bots
Let’s start with how AI is being used today. One conference participant, a partner in a law firm, explained that it can take six months in mergers and acquisitions work to research, collate and compare documents. Robots, which can scan documents at ten seconds a page, can halve the time this linear data-processing task takes. So robots are efficient at reducing drudgery, which is what we all want from them. The challenge is that law firms operate an apprentice model, and trainees learn their craft through exactly this kind of legwork; eliminating it simultaneously undermines the whole learning system. How will lawyers need to be trained differently in the future?
Another example came from a Tokyo hotel that recently employed a virtual receptionist. Have you seen those advertising or information stands in airports, where a human shape is animated by a video of a talking person (they featured in the movie Minority Report as early as 2002)? That’s the format. The receptionist robot fared well enough at the straightforward task of checking people in (although it takes two seconds to process the answer to each question, which feels like an age to a waiting human), but was hopeless at problem-solving (“there’s a leak in my room”).
So far, so familiar; back to robots as clunky and amusing. But Jonny Poole, senior animatronic engineer at Robots of London, the firm that also offers Pepper, the hotel concierge robot, says his company is already working on a highly realistic face for a robot, and is convinced that robots can be taught empathy. Look at Sophia, the creation from Hanson Robotics, for an example of this today.
Danny Goh, partner and commercial director of Nexus Frontier Tech, describes today’s robots as pre-recorded data boxes. He demonstrated an attractive animated woman on a screen who gave very full and speedy answers to the questions posed to her, but who was still essentially spouting analytics.
AlphaGo (the Google DeepMind program, owned by Alphabet Inc, which in 2015 became the first computer to beat a professional human player at Go) is not exactly thinking for itself; it is still programmed. It learned by studying thousands of human games and by playing against itself, and it simply evaluates possible moves faster than any human opponent can, drawing on knowledge distilled from thousands of brains.
Emotional recognition
Yet Danny believes that AI will have much greater emotional recognition by 2030. He can already see the day when AI will perform human and emotional work, such as healthcare and education. He describes Dolly the sheep as “a step into God’s territory – we are not there yet with AI”, with the implication that one day we will be.
What worries him about this is ethics. What if AI, charged with teaching children, picks up the unconscious bias of its programmer? Deep learning models are created by researchers with their own biases. So, not only are we in danger of sharing the distinctive human difference of empathy with AI as it develops – we may also be adding unhelpful aspects of human personality, like bias. AI may become more like humans than we currently expect.
What does this mean for future leadership?
An intergenerational panel, from a newly employed Millennial to a seasoned chief executive, debated what this brave new world will need from its leaders. The Millennial wanted to be judged on her output, not on inputs or processes, describing a culture at her workplace where being present, rather than being effective, still counts. She also wanted to see a world where technology is embraced not just to cut costs, but to give employees flexibility. The Gen X participant agreed, observing that schools and workplaces still reward knowledge and presence, rather than the ability to research and process. She wanted to see more empowerment in the workplace, with less micromanagement, where people feel free to speak from the heart, the one measure of a human workplace. And finally, the Baby Boomer believed that leaders should guide others rather than dictate to them, share decisions, and recognize that people are the heart of any organization.
All in all, it was a rather depressing picture of the state of leadership today, offering a view that doesn’t seem to have changed much in the last 30 years. It left this author wondering how ready we are to tackle the challenges of the Fourth Industrial Revolution when we still seem to be grappling with the command-and-control leadership legacy of the past.
Happiness

The conference concluded in typical Duke CE experiential style, with participants learning in 30 minutes to sing, play and perform Pharrell Williams’ 2013 song, Happy. After all, isn’t that what we humans really want? As for what AI may want or need, that remains to be discovered.
— Dr Liz Mellon is chair of the editorial board of Dialogue
An adapted version of this article appeared on the Dialogue Review website.