As the role of AI in business grows ever larger, C-suite leaders need a clearer understanding of the opportunities and risks it presents.
Artificial intelligence (AI) is increasingly prevalent in our daily and professional lives, whether we are aware of it or not. In the early days of its implementation, AI was used by scientists, academics and professionals in varied projects, with results that have been eye-opening, counterintuitive and, in some cases, controversial. Leaders will be increasingly familiar with the basics: AI algorithms use mathematical and statistical techniques to process and analyze large amounts of data in order to make predictions, recognize patterns or perform specific tasks. How a given algorithm works varies with its type, the problem it is designed to solve, and the parameters entered – often by humans, whose judgments are not themselves driven by algorithmic frameworks.
But understanding the basics is not enough. AI clearly presents many significant possibilities. Given the technology’s current stage of development, how should the C-suite be thinking about AI, from the perspective of both opportunities and risks?
The C-suite and AI
As C-suite leaders begin to discuss AI more often, and in more detail, one of the first questions that arises is how they can effectively explore a highly complex set of factors. I increasingly hear executives asking how they can best assess AI in relation to the company’s business model, its customers’ behaviors, the supply chain, brand image, regulatory agencies and employees. In short, the question is: what should be on the executive team’s radar when it comes to AI – its pluses and minuses? Often, the starting point is to develop a clearer understanding of the limitations of AI. Executives need to assess multiple areas that can present challenges as AI is deployed.
Bias and discrimination
AI systems can be susceptible to biases present in the data they were trained on. If the training data reflects societal biases or unfair practices, the AI system may inadvertently perpetuate or amplify them, leading to discriminatory outcomes.
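A minimal sketch shows how this happens in practice. The hiring data below is entirely synthetic and the scenario hypothetical, but the mechanism is the one described above: the model faithfully learns the prejudice baked into its training labels.

```python
# A minimal sketch: a model trained on biased historical decisions
# reproduces the bias. All data and feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)   # 0/1 protected attribute
skill = rng.normal(0, 1, n)     # genuinely job-relevant signal

# Historical hiring decisions favored group 0 regardless of skill:
hired = (skill + 0.8 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# At identical (average) skill, the model now scores the groups differently:
for g in (0, 1):
    p = model.predict_proba([[0.0, g]])[0, 1]
    print(f"P(hire | average skill, group {g}) = {p:.2f}")
```

Note that even if the protected attribute were dropped from the inputs, proxy features correlated with it could leak the same signal. The point for leaders is that the bias enters through the data, not the code.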
Lack of transparency
Some AI models, such as deep neural networks, can be difficult to interpret or explain. This lack of transparency can raise concerns regarding accountability, as it becomes difficult to understand how AI systems arrive at their decisions or recommendations.
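To make the contrast concrete, the illustrative comparison below (on a standard public dataset) shows that a linear model’s reasoning can be read directly from one weight per named feature, while even a small neural network spreads its reasoning across thousands of interacting parameters with no individual meaning.

```python
# A minimal sketch of the interpretability gap: a linear model's reasoning
# can be read off its coefficients; a neural network's cannot.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)

linear = LogisticRegression(max_iter=1000).fit(X, data.target)
# One weight per named feature -- directly inspectable:
for name, w in sorted(zip(data.feature_names, linear.coef_[0]),
                      key=lambda t: -abs(t[1]))[:3]:
    print(f"{name:25s} weight {w:+.2f}")

mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                    random_state=0).fit(X, data.target)
# Thousands of interacting weights with no per-feature meaning:
n_params = sum(w.size for w in mlp.coefs_) + sum(b.size for b in mlp.intercepts_)
print(f"MLP parameters: {n_params}")
```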
Limited generalization
AI models are typically trained on specific datasets and may generalize poorly when faced with new or unexpected situations. They can perform excellently within their training domain, yet falter when encountering unfamiliar scenarios.
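A toy illustration, on purely synthetic data, makes the point: a curve-fitting model that looks near-perfect inside its training range can be wildly wrong just outside it.

```python
# A minimal sketch of in-domain vs. out-of-domain performance,
# using a toy regression task (all data synthetic).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 3, 200)   # the model's training "domain"
y_train = np.sin(x_train) + rng.normal(0, 0.1, 200)

model = make_pipeline(PolynomialFeatures(6), LinearRegression())
model.fit(x_train.reshape(-1, 1), y_train)

for lo, hi, label in [(0, 3, "in-domain"), (6, 9, "out-of-domain")]:
    x = np.linspace(lo, hi, 100).reshape(-1, 1)
    err = np.mean((model.predict(x) - np.sin(x.ravel())) ** 2)
    print(f"{label:14s} mean squared error: {err:.3f}")
```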
Vulnerability to adversarial attacks
AI systems can be vulnerable to deliberate manipulations known as adversarial attacks. By introducing carefully crafted inputs or perturbations, attackers can deceive AI systems into making incorrect predictions or decisions.
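The mechanics can be sketched with a deliberately simple linear classifier on a public dataset; attacks on deep networks, such as the well-known fast gradient sign method (FGSM), follow the same logic of nudging inputs along the directions the model is most sensitive to.

```python
# A minimal sketch of an adversarial perturbation against a linear
# classifier; deep-network attacks (e.g. FGSM) follow the same principle.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
model = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0]
logit = model.decision_function([x])[0]
w = model.coef_[0]

# Move each feature just far enough, in the direction of the weights,
# to push the score across the decision boundary:
eps = 1.1 * abs(logit) / np.sum(np.abs(w))
x_adv = x - eps * np.sign(logit) * np.sign(w)

print("original prediction: ", model.predict([x])[0])
print("perturbed prediction:", model.predict([x_adv])[0])
print(f"per-feature change: {eps:.3f} standard deviations")
```

In high-dimensional models, the change needed in any single input can be far smaller than natural variation in the data, which is what makes such attacks hard to detect.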
Data limitations
AI models rely heavily on high-quality, representative data for training. If the data used is incomplete, unrepresentative or riddled with errors, the AI system’s performance may suffer, leading to inaccurate or biased results.
Ethical concerns
The use of AI raises ethical considerations in various domains, such as privacy, surveillance, job displacement and autonomous decision-making. There can be disagreements and challenges in defining appropriate guidelines and regulations to ensure ethical AI deployment.
Lack of common sense and contextual understanding
While AI models excel in specific tasks, they often lack human-like common sense reasoning and deep contextual understanding. This can lead to errors or misinterpretations in situations where background knowledge or context is crucial.
AI myths and realities
With a clearer picture of the reality of AI’s capabilities and limitations, executives can begin to interrogate some of the common myths that currently surround the technology. One of the main claims made by AI enthusiasts is that the technology will enable the automation of tasks that have traditionally been done by skilled professionals. It is similar to the claims made 20 years ago that industrial robots would eliminate lower-skilled jobs. Yet robotics did not wipe out vast numbers of jobs; rather, it created new jobs requiring a different set of skills.
Another assumption, at least partially a myth, is that AI will use customer purchase data to drive better targeting, control inventory levels and project future sales. Yes, analysis of customer data can reveal patterns in spending behavior and in purchases of specific items. But the application of this data could be seen as detrimental by customers. For example, if you recently bought a blender for the kitchen online, you may find yourself pelted with ads for blenders for weeks to come. But you already have one: why do these companies think you need another? Senior executives need to be mindful of how AI, machine learning and big data are applied.
Another myth is that AI will eliminate human error. AI can be used for predictive outputs, given a stream of input – but the quality of the output is directly proportional to the quality of the inputs. The old truism has it right: garbage in, garbage out. As the volume of data increases and more people provide data points, will the quality improve or deteriorate?
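The effect is easy to demonstrate. The sketch below, a hypothetical illustration on synthetic data, trains the same model on progressively corrupted labels and watches test accuracy fall in step:

```python
# A minimal sketch of "garbage in, garbage out": the same model,
# trained on progressively noisier labels, degrades accordingly.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for noise in (0.0, 0.2, 0.4):
    flip = rng.random(len(y_tr)) < noise   # corrupt a fraction of labels
    y_noisy = np.where(flip, 1 - y_tr, y_tr)
    acc = LogisticRegression(max_iter=1000).fit(X_tr, y_noisy).score(X_te, y_te)
    print(f"{noise:.0%} corrupted training labels -> test accuracy {acc:.2f}")
```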
When generative AI gets something wrong, the consequences can include legal ramifications, reputational damage and customer loss. AI models can also be sensitive to biases in the data they were trained on, and inadvertently reflect or amplify them. Determining legal responsibility in cases where AI generates errors is an evolving area of law. The potential complexity is enormous, depending on factors such as the jurisdiction, the nature of the error, the specific AI system involved and the context in which the information is used. Senior executives must be aware that responsibility for AI errors can be attributed to different parties involved in the development, deployment and use of the system.
AI has also been hailed as a great democratizer of data, information and knowledge – but this is surely another myth. As organizations in different locations develop more comprehensive datasets, will some countries or cities with large, highly educated populations become smarter than others? Would a university town with a smaller sample size yield different results? Substantiating any claim that AI democratizes knowledge would require complete and comparable information from its application in different contexts.
Another common myth is that AI-based knowledge is more likely to be correct than human knowledge. This rests on the broad perception that AI uses extensive and accurate datasets. In today’s world, you can go to a doctor and receive a diagnosis, then seek out a second opinion, which may turn out not to be fully aligned with the first. It has been claimed that ChatGPT can diagnose diseases better than your doctor (indeed, a study published in February 2023 found that it could pass the US Medical Licensing Exam). But isn’t that claim relative to who your doctor is? What AI can certainly do is provide better information and insight to doctors, enabling more holistic diagnoses and treatments. AI can process vast amounts of medical records and contrast them against the medical literature to identify relevant patterns and risk factors, and to detect abnormalities.
But does it know all the questions to ask once the data-heavy evidence has been exhausted? AI is not more likely to be correct simply because it interrogates vast amounts of data; it may be correct if it has interrogated accurate data and correlated it to generate cogent conclusions about the questions it asked.
When AI fails, who is to blame?
The question of who is to blame when AI systems fail is one that C-suite leaders need to explore particularly carefully. A wide array of parties could be considered contributors to any given failure, including the organizations or individuals who design, develop and produce the AI system, users who may have operated or used it negligently or improperly, and entities or individuals who provided data that proved to be faulty or biased. Some responsibility may also fall on regulators and policymakers for inadequate preventative regulations or oversight.
The underlying challenge for executives is to provide context, clarity and consistency in how AI is deployed within the organization, and to provide oversight of the use and outcomes of generative information. Whether the issue is the quality of customer relationships, ways to improve products, or strategic questions such as capital expenditures, production levels and lending rates, AI can supply a wide array of information when senior executives make significant decisions or changes. That information enables executives to provide the rationale and context behind those choices: it necessarily falls to executives to explain the factors considered, the trade-offs involved, and the anticipated impact on the organization, employees, customers and stakeholders. By providing context, senior executives ensure that employees have a clear understanding of the organization’s direction, purpose and performance. This context supports alignment, encourages informed decision-making and fosters a sense of purpose and engagement.
The opportunity for AI
Senior leaders need to consider seven critical dimensions when they evaluate the potential role of AI in their business.
1 Potential benefits
Senior executives should understand the potential benefits of AI to their organization. These include improved operational efficiency, enhanced decision-making through data analysis, automation of repetitive tasks, better customer experiences and the ability to identify new business opportunities.
2 Strategic alignment
Executives should align AI with their organization’s strategic goals and objectives by assessing how AI might alter the strategic direction, or the pace at which the organization moves toward its objectives. They must identify where AI can have the most significant impact and prioritize resources accordingly. Developing a clear AI strategy and roadmap can guide decision-making and resource allocation.
3 Data readiness
AI relies heavily on high-quality and relevant data. Executives should assess their organization’s data readiness, including data availability, quality and accessibility. They should ensure proper data governance practices, data privacy considerations, and the availability of necessary infrastructure to support AI initiatives.
4 Talent and skills
Executives should assess the skills and talent required to implement and manage AI initiatives effectively. This includes understanding the roles of data scientists, AI engineers and other specialists. Identifying skill gaps and investing in training or hiring efforts can help build the necessary expertise.
5 Ethical considerations
AI raises complex ethical questions around issues such as data privacy, algorithmic bias, transparency and accountability. Executives should be aware of these ethical concerns and ensure that their organization follows ethical governance guidelines and practices. They should consider the potential impact of AI on employees, customers and society as a whole, and embed AI in their ESG strategies.
6 Change management
AI implementation often involves organizational changes. Executives should anticipate potential resistance to change, address concerns and communicate the benefits of AI to employees. Engaging employees in the process, providing training and support, and fostering a culture of innovation and learning can facilitate successful AI adoption.
7 Continuous learning and adaptation
AI technologies evolve rapidly and senior executives should stay informed about the latest developments, trends and best practices in AI. Regularly evaluating the impact and outcomes of AI initiatives, monitoring key metrics, and making necessary adjustments or refinements are crucial for continued success.
Every new automation technology replaces some tasks within society, while simultaneously creating new opportunities and jobs requiring new skills. The opportunity for senior executives today lies in how AI can be employed to rethink the business model, realign business processes, restructure the organization, re-educate people and redeploy the organization’s human capital. If you are looking at AI simply as automation to reduce the human capital cost base, you may already be suffering from analytical bias. You should be asking instead: “How can I use AI to make my organization a more productive place to work, given today’s competitive business realities?”