AI holds promise for finance teams, but success depends on identifying the right use cases rather than buying into the hype.
Some technologies have unusual – even bizarre – paths to market success. Wilhelm Röntgen’s discovery of the X-ray, for example, resulted in a device serving a surprising market niche: the shoe-fitting fluoroscope, otherwise known as the pedoscope. These machines were used by shoe stores to measure foot size for several decades, until they were eventually banned in the second half of the 20th century.
Aside from the obvious problem of exposing customers to radiation, the fluoroscope illustrates a failure to ask what problem the technology was meant to solve. Clearly, tape measures were – and still are – an adequate solution for measuring feet. We all now accept that X-rays are far better deployed in healthcare. For shoe shops, this investment in expensive technology was completely unnecessary.
Something similar may be happening today as AI is tested in a wide variety of speculative use cases. It risks becoming a solution in search of a problem. The danger for finance teams, as highlighted in a recent IDC InfoBrief sponsored by Unit4, The Path to AI Everywhere: Exploring the Human Challenges, is that most AI proof-of-concept projects do not make it into successful production.
As finance leaders consider where to deploy AI, they must avoid falling for the hype about what it might deliver – and instead spend time properly evaluating AI’s best use case, based on the real-world problems it can solve for their finance operations.
AI, meet finance
Advocates for AI in finance believe the technology can help address multiple priorities: automation, prediction and innovation. It is already becoming more visible in back-office processes, sometimes without users even realizing it. Teams are exploring its use in areas such as anti-fraud measures, anomaly detection, interactive service chatbots, hands-free automated form-filling for expenses management, invoice processing, and so on.
However, in this most regulated of business functions, special care is required. Critics might argue that this detailed evaluation is a displacement activity – a thin veil for what is more honestly described as risk aversion. But finance professionals should counter that they have a long tradition of embracing innovation – it’s just that they maintain a healthy skepticism of vendors’ promises about technology-enabled silver bullets.
As a result, CFOs are increasingly working closely with CIOs to determine where AI can deliver a tangible return on investment, and what technology and skills investments will be required of their businesses. Many grey areas demand further investigation. For example, how do we know that AI can be trusted in data-handling processes? Will customer records be scraped and shared in ways that could expose personal data?
Even seemingly low-risk practical applications may carry hidden risks. Perhaps a CFO wants to communicate key messages to the organization, but feels uncertain about finding the right language to get their point across. They use a generative AI service to assemble their thoughts into clear prose, looking to save time and enhance readability. But what if the AI ‘hallucinates’ or misinterprets the core text? Inaccuracies could slip through. Of course, best practice dictates that a human gatekeeper should review the output – but at that point, how much of the automation benefit has been lost?
Another challenge: predictive finance. While organizations aspire to forecast financial results with forensic precision, AI is still far from being able to accommodate the manifold factors that influence a balance sheet. Yes, it can warn of potential risks, and it is highly effective in predictable, narrow applications. However, it cannot foresee black swan events – think of how unprepared businesses were for Covid-19 lockdowns – or reliably uncover hidden insights in complex data sets.
Building AI readiness
The fact is that for many far-reaching potential applications of AI, significant risks and unknown factors remain. These uncertainties will persist until society, legal systems and practical experiences establish clearer guidelines for AI deployment. Many organizations will find it suits them to strike a balance between early adoption to gain a competitive edge, and waiting until the AI landscape is more mature. However, waiting does not mean inaction. Companies should be going all out to improve data quality and integration foundations, so they are ready when bolder AI maneuvers are viable.
They should also be looking for the low-hanging fruit of automating chores. This means recognizing an essential truth: new technologies should not just be fun playthings; they must address actual and anticipated business frustrations. Organizations should start by identifying their most pressing frustrations and then explore how AI (and other tools) can provide targeted and safe solutions.
There’s a reason why Time magazine named the fluoroscope one of the 100 worst ideas of the 20th century. There are many good use cases for AI in finance, but some bad ones too. Leaders need to be clear they understand the difference.