Generative AI can be friend not foe – with the right leadership.

Like many leaders, we like to cook. An hour or two in the kitchen making something inventive or delicious is the perfect diversion from the travails of professional life. Cookery, like all great hobbies, is a rewarding test of skill. Now imagine that home cookery was rendered obsolete. No longer would your family or friends benefit from your passion. Robots would cook instead.

No human can be as precise, as knowledgeable, or as efficient as a mechanized cook. Just as computers years ago eclipsed chess grandmasters, artificial intelligence threatens even professional chefs. From the passionate home cook to the most decorated gastronome in Paris, imagine that robots – which can neither taste nor smell – will make human cookery redundant. Suddenly, many leaders’ favorite pastime becomes pointless: like maintaining a lawn with hand shears when a powered lawnmower does a better job in a fraction of the time.

We have glimpsed that future. Robot cooks might not yet be Michelin-starred, but they exist. In an eye-popping video, Moley Robotics shows how a kitchen can become automated. It maps and models your kitchen. You can teach it your family’s favorite recipes. The system is imperfect – it still requires food to be prepped by hand – but the manufacturer plans to overcome this by having food kits sent to households. If a machine can store and interpret recipes, procure ingredients, and cook dinners, what role does a human cook have? Any sense of purpose or reward we gain from our beloved hobby feels under threat.

From threat to opportunity

From kitchens to offices and professional workplaces, people are worried. The advent of generative AI – large language models such as ChatGPT – threatens not to aid human endeavor but to replace it. Despite optimistic rhetoric that generative AIs will soon be companions to humans, many early use cases reveal them to be competitors, contesting – and often winning – the same territory.

Can any optimism about the AI age be justified? Yes. With the right leadership models, generative AI can be a powerful sous-chef, leaving humans to exercise their creative advantage. A recent Duke Corporate Education conference sponsored by Mastercard and Fortune saw experts in generative AI, leadership, and human resources convene to propose leadership tools that will ready companies for a harmonious and productive relationship between people and machines. Six key lessons emerged.

The fear complex

  • Overcome fear with unambiguous reassurance
  • Profile your people and tailor your outreach accordingly
  • Embrace the ‘human in the loop’ model

Be aware: your people are probably scared of AI. The rapid ascent of generative technologies has left many employees feeling threatened. Like home cooks, many fear a loss of purpose to their work. Others fear for their entire jobs.

The first role of a leader is to erase this sense of fear, turning a climate of threat into a climate of opportunity. How can this be done? Through clear and unambiguous reassurance. “Start with saying, ‘nobody is going to lose their job because AI is coming into the frame’,” a senior figure at the event said. “But what we do expect people to do is get out there and learn, understand what the AI’s capabilities are and how to interact with it.”

Delegates advocated the concept of ‘human in the loop’ – an employee-first strategy whereby employees are the masters of generative AI, rather than vice versa. AI is the adviser; humans are executives. “AI does not make the decisions,” the senior figure said. “It gives folks the insights which they need to make better decisions.”

Attitudes towards generative AI among employees, leaders, and customers populate a scale with two poles: Fomo and Fogi.

Fomos – those with Fear of Missing Out – are the urgent enthusiasts, the early adopters, those who want to dive into the technology in the hope of gaining early mover advantage.

Fogis have a Fear of Getting Involved. They are more skeptical. They worry that early AI use cases could be wastes of time and/or money, and would rather let others test the technology in real-world scenarios before they decide which systems to adopt. Profile stakeholders along the Fomo-Fogi scale, and tailor your outreach efforts accordingly, encouraging prudence from Fomos and broad-mindedness from Fogis. The wise approach to AI exists between the two extremes: be cautious of the threat, but open to the opportunity.

The skills mandate

  • Tech can be dangerous without training
  • Teams must lead experimentation and learning
  • Learning fosters acceptance and approval

A little knowledge is a dangerous thing. How do we convey to teams how best to utilize the superpower of generative AI? At the event, one senior guest used a powerful analogy. Imagine a gang of builders is constructing a house, he said. The time comes to install the roof. The builders are offered a choice. They can use a traditional hammer or a power tool. Which do they choose? “If you say to a roofer, ‘you can use a nail gun or a hammer’, they are going to take the nail gun,” the guest at the private event said. “But, if they don’t have training in it, they are going to do harm to themselves and other people.”

Reducing the skills deficit through training and controlled experimentation both enhances the potential power of teams and helps win hearts and minds, persuading employees that AI is a powerful tool that can improve their performance and make their working lives better, more creative, and less routine.

But employees must be integral to the innovation process. Small teams should be empowered to experiment in depth with generative AI. Let your people lead that experimentation.

The feedback paradox

  • Generative AI is increasingly able to give feedback to employees
  • Teams are likely to direct their responses to feedback to human bosses
  • Leaders remain ultimately responsible for employee engagement and performance

Generative AI is removing some of the tasks managers and leaders like the least. One of the key roles of a leader is giving feedback to their teams. Yet this comes up frequently in surveys as a responsibility many would rather be without. Can AI assess and give feedback to your people? To some extent this is already possible. Moreover, the advance of the technology means the prospect of having robots involved in employee relations grows closer every day.

Yet, what happens when employees dislike or disagree with the feedback that an AI gives them? Do they respond to the AI – or to you, their boss? The answer is that most people will respond only to a human leader. While AI will be theoretically capable of performance management, human leaders remain responsible for it. This paradox is repeated in several organizational competencies: while tasks can be devolved to generative AI, the responsibility for their outcome remains with people.

The values framework

  • Generative AI activity must adhere to and amplify your organization’s values
  • Frank assessments of the real costs and benefits of AIs are crucial
  • Test your AI strategy with the Three Rs model

Every organization needs a generative AI framework that amplifies its values as a firm. Review your business’s stated objectives regarding its relationship with customers, communities and operating environment. Now assess how generative AI can contribute to those goals. Experts at the private event proposed a triple-pronged test: the Three Rs.

The Three Rs

Responsibility

  • How does your AI activity align with your company’s ethical values framework?
  • Where is the reporting and checking framework that ensures that your AI activity is being scrutinized?

Reliability

  • How much inaccuracy can your company tolerate in its use of AI?
  • How does this differ between internal activities and customer-facing activities?

ROI

  • What is the real cost and benefit of investing in an AI technology for your company?
  • To what extent will the recruitment of quality-assurance professionals offset any efficiency savings delivered by the AI?

Applying this test and adjusting activity based on the outcome helps organizations shift their use of generative AI from high-risk to low-risk, iterating their AI strategy accordingly as new AI activities enter the business.

The bias effect

  • Generative AI embeds prejudice in data analytics
  • It can be harder to identify bias in machines than in humans
  • Company-wide appeals to eliminate AI bias bring multiple benefits

Generative AI is biased because humans are biased. Generative AI large language models (LLMs) are trained on human-generated content, thus embedding human prejudices in their data analytics. Yet there is a risk that we are less likely to identify bias in AI content than we are in the work of human colleagues. Moreover, it can be difficult to adjust AI content for bias even where we are able to identify it.

The key to overcoming the in-built bias in generative AI modeling is to make everyone aware that the technology is imperfect, prejudiced. Ask employees – many of whom will be closer to the coalface than leaders – to identify and report biased AI output when they encounter it. This approach has a double benefit: exorcizing bias from a company’s data analysis while enhancing employee engagement.

“If you’re an employee who can say, ‘I helped my company make a policy change that reduced bias,’ you’re going to be a very motivated employee,” said one senior professional at the private event. “You will become somebody who is an advocate for this in the organization – an influencer.”

The differentiation challenge

  • Companies face homogenization due to using similar AI tools
  • Forward-thinking organizations develop machine-learning systems
  • Competitive advantage can be gained through exclusive AI curation

Under generative AI, companies face homogenization. If all companies use the same or similar tools – such as ChatGPT – how do they differentiate their offer to customers? How do they command a competitive advantage? Only a handful of publicly available LLMs exist, so there is little opportunity for differentiation if companies rely heavily on mass-market technologies.

Wise firms are developing their own AIs. These bespoke, proprietary systems are fed with data and content compiled by the business itself. This knowledge provides the basis for potent machine learning built on the collective intelligence accrued by companies, and it has the potential to give firms a competitive edge by differentiating their offer from those of their rivals.

The generative ally

Remember our kitchen? The optimistic future for human cooks is that AI will not eclipse them. It will free them up to become better, more creative chefs, with AIs as cherished kitchen hands. Catering for dinner parties will be easier and more fun. Home menus will be innovative and interesting.

In business, as in the kitchen, generative AI can be a powerful companion. With the right leadership techniques, it can become a supporter, not a competitor.

Yet for that outcome to become reality, leaders must act now. Fear must be overcome. Bias must be comprehensively rooted out. Humans must be empowered to exercise the qualities that make them uniquely human.
Leadership for the AI age must focus not on machines, but on people.

Sharmla Chetty is chief executive of Duke Corporate Education. Vishal Patel is president of global markets at Duke Corporate Education.