AI has made jaw-dropping advances. But beware of its in-built biases and limitations – and remember the value of nuanced human expertise.

Generative AI tools have burst into the public consciousness. For many of us, they have become part of our daily lives. OpenAI’s ChatGPT was reported to be the fastest-growing app ever, outpacing even viral hits like TikTok.

Generative AI tools algorithmically generate new content by aggregating, synthesizing and (re)presenting information in whatever tailored form the user requests. Their emergence has underlined that we are, as yet, only scratching the surface of digital transformation. These tools will profoundly affect not just businesses and our professional endeavors, but societal interactions and humanity as a whole.

It is widely accepted that generative AI has the potential to significantly improve many aspects of our lives. But it also creates enormous uncertainty, because its impacts on humanity could be negative, even catastrophic. If we were to look for a comparison from history, the uncertainties of the still-nascent AI revolution are reminiscent of those surrounding the use (or misuse) of nuclear technology.

AI is poised to have critical and long-term impacts on our world – but these impacts are no longer distant or hypothetical. The unprecedented speed of adoption of generative AI means that uncertainty is here, and it is now. Leaders urgently need to assess both the opportunities and the threats created by the deployment of these powerful tools.

Deploying generative AI

In the first instance, business has an opportunity to delegate a variety of mundane tasks to generative AI and other algorithmic tools. For example, AI has been used to draft tailored, personalized offers to customers. CarMax, the largest used-car retailer in the US, has reportedly been using AI to generate personalized listings for sellers, and has used OpenAI’s technology to aggregate relevant reviews and provide personalized recommendations to prospective buyers. Zillow, a real-estate platform, provided algorithmically-generated binding offers to sellers as part of its house-flipping operation; it has since closed that operation after heavy losses, the accuracy of its valuations apparently insufficient to sustain the business model. Zillow’s experience is a cautionary tale about over-reliance on algorithmic valuations.

Other examples are widespread. Chatbots are increasingly being used to handle mundane customer service tasks, slowly but surely making armies of customer-service agents obsolete: a human decision-maker enters the picture only if an issue demands escalation. In fact, most of us have relied on virtual assistants for some time: Apple launched Siri back in 2011, and Amazon introduced Alexa in 2014. Today’s emerging generative AI tools massively enhance the usability of AI personal assistants, allowing them to go far beyond the well-established tasks of managing your personal schedule. Such tools can easily handle significant communication tasks, such as classifying the importance of incoming messages and automatically generating and deploying personalized responses. These AI-generated messages could range from personalized performance-review videos sent by a chief executive to hundreds or thousands of non-direct reports, to a personalized video message from a brand ambassador to targeted potential customers.

Another straightforward use case, and one that puts both the usefulness and the limitations of generative AI in perspective, is software development. Generative AI can readily produce blocks of code for well-defined, known tasks, freeing developers to focus on the creative aspects of building software. That division of labor matters, because generative AI cannot reliably develop software for a novel use.
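To make this concrete, consider the sort of well-defined, much-solved task that current tools handle with ease. The sketch below is our own illustration rather than the output of any particular tool; the function name and the validation rule are assumptions chosen for familiarity.

```python
import re

def is_valid_email(address: str) -> bool:
    """Return True if a string looks like a plausible email address.

    A deliberately routine task: near-identical patterns appear in
    countless public code examples, which is exactly why a generative
    AI tool can reproduce something like this reliably on demand.
    """
    pattern = r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$"
    return re.match(pattern, address) is not None

print(is_valid_email("jane.doe@example.com"))  # True
print(is_valid_email("not-an-email"))          # False
```

The task is easy for the machine precisely because thousands of near-identical solutions already exist in its training data; ask the same tool to design software for a genuinely novel problem, and its reliability falls away.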

This is not unlike the breakthrough created by spreadsheet software four decades ago. Spreadsheets allowed decision-makers to organize and analyze large amounts of data with unprecedented ease, but they could not connect the numbers in the cells to the real-life implications of decisions based on those numbers. Similarly, generative AI cannot provide solutions or answers beyond what is already available, recorded, known, or solved in the datasets it can access.

The risks of over-reliance

This underscores a principal shortcoming of (over)relying on generative AI. Because it is an all-purpose tool, the outputs it generates are necessarily based on generic, context-independent rules. It is frequently impressive on topics that a user knows nothing about, but appears disappointingly shallow – and is often incorrect – when the user has even a shred of expertise or experience.

To verify this, experiment with one of the available tools. Ask it for guidance on a topic from your area of expertise – ideally one without commonly-known or obvious insights. Read what it produces carefully: an automated synthesis of the available (mis)information, with limited (or no) verification or quality assessment, is the prototypical setting in which the ‘wisdom of crowds’ fails. Generative AI can provide common wisdom on any topic in a palatable format, but it would be a fool’s errand to use it to replace one’s critical thinking or, worse, to drive decisions.
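If you prefer to run the experiment programmatically, a minimal sketch follows. It assumes the OpenAI Python client (pip install openai) with an API key set in the environment; the model name and the prompt topic are illustrative placeholders, not recommendations.

```python
# A minimal sketch of the expertise test, assuming the OpenAI Python
# client and an OPENAI_API_KEY environment variable. Swap in a topic
# you know deeply; the placeholder below is hypothetical.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {
            "role": "user",
            "content": (
                "What are the non-obvious pitfalls of pricing "
                "seasonal inventory in used-car retail?"
            ),
        },
    ],
)

# Judge the answer as an expert would: is it a fluent synthesis of
# common wisdom, or genuine, verifiable insight?
print(response.choices[0].message.content)
```

Whichever interface you use, judge the output as you would an unvetted briefing note: fluent and plausible, but only as good as the crowd wisdom it aggregates.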

Users also need to be aware that generative AI amplifies algorithmic bias by design. Its very purpose – creating content on demand – precludes the option of not creating content when there is uncertainty about the optimal output. Hence, if so prompted, generative AI tools will take a binary position, determined by how the available information is aggregated, regardless of nuance. This further propagates the wisdom of (non-expert) crowds, based on prevailing (mis)information, and accentuates one of the critical pain-points and deficiencies of the digital revolution: the difficulty of determining the relevance, veracity or quality of available information. Just because content can be straightforwardly produced en masse does not mean the user can be confident that it is reliable.

Generative AI is least reliable, and brings least value, precisely when decisions or actions hinge on information which is not clear-cut.

The value of leadership

If one’s value is based on an ability to complete mundane tasks, the technological progress currently exemplified by generative AI will probably bring that value to zero in the near future. On the other hand, the ability to make hard decisions and take actions that go against common wisdom – one of the distinguishing features of great leaders – is as important as ever. The abilities to think critically, to separate the relevant from the irrelevant, and to spot the importance of nuanced details are even more valuable in a world in which humans are no match for machines’ capacity to process vast amounts of data.

Alarmist perspectives on the catastrophic consequences of AI taking over might be premature, provided we respect the value of expertise and critical thinking. These are the advantages that humans still have over one-size-fits-all (mis)information aggregation algorithms.