“A computer can never be held accountable, therefore a computer must never make a management decision.”
– IBM Training Manual, 1979
The adoption of artificial intelligence (AI) is accelerating. According to the IBM Global AI Adoption Index 2023, 42% of organizations have actively deployed AI, while another 40% are experimenting with it. Among those using or exploring AI, 59% have increased their investments and rollouts over the past two years. As a result, AI-based decision-making is surging, with organizations relying on intelligent tools to reach (presumably) accurate conclusions.
This rapid adoption raises a critical question: Who is accountable when AI makes a bad decision? Should the blame fall on IT teams, executives, the developers of AI models, or device manufacturers?
In this article, we explore the evolving world of AI decision-making and revisit the quote above in a modern context: Does decision-making still need a human in the loop, or can AI take charge?
Achieving accuracy: Where AI is enhancing business results
Guy Pearce, principal consultant at DEGI and a member of the ISACA emerging trends working group, has worked with AI for more than three decades. “First, it was symbolic,” he says, “and now it is statistical. AI is algorithms and models that enable data processing and improve business performance over time.”
Data from IBM’s recent AI in Action report illustrates this transformation. Two-thirds of leaders report that AI has driven revenue growth rates above 25%, and 72% say that the C-suite and IT leadership are fully aligned on the next steps of their AI journey.
With confidence in AI growing, enterprises are deploying intelligent tools to improve business outcomes. For example, financial management firm Consult Venture Partners deployed AIda AI, a conversational digital AI assistant that uses IBM watsonx technology to answer potential clients’ questions without human intervention.
The results speak for themselves: AIda AI answered 92% of inquiries correctly, 47% of inquiries led to webinar registrations and 39% converted into leads.
Falling short: Consequences of AI errors
A 92% accuracy rate for AIda AI is impressive, but it still means the assistant got things wrong 8% of the time. So what happens when AI makes a mistake?
According to Pearce, it depends on what is at stake.
He gives the example of a financial firm using AI to evaluate credit scores and issue loans. The stakes of these decisions are relatively low. In the best case, AI approves loans that are repaid on time and in full. In the worst case, borrowers default and the company must pursue legal remedies. While inconvenient, the negative outcomes are far outweighed by the potential benefits.
“For high stakes,” Pearce notes, “consider the healthcare sector. Say we use AI to address the problem of wait times. Do we have enough data to make sure patients are seen in the right order? What if we get it wrong? The consequence could be death.”
The use of AI in decision-making, then, depends heavily on the decision domain and on the consequences of those decisions, both for the company making them and for the people they affect.
In some cases, even the worst outcome is a minor inconvenience. In others, the consequences could be devastating.
Shouldering the blame: Who is responsible when AI decisions go wrong?
In April 2024, a Tesla in “full self-driving” mode struck and killed a motorcyclist. The driver admitted to looking at their phone before the crash, despite the system’s requirement for active driver supervision.
So who is accountable? The driver is the obvious choice, and they were arrested on charges of vehicular manslaughter.
But accountability may not end there. A case could be made that Tesla bears some of the blame because the company’s AI algorithm failed to detect the motorcyclist. Regulators such as the National Highway Traffic Safety Administration (NHTSA) could also be at fault if their testing was not comprehensive or rigorous enough.
One could even argue that the developers of Tesla’s AI are accountable for allowing the release of code capable of causing a death.
This dilemma captures the nature of AI decision-making: Is one party to blame, or is everyone to blame? “If all the parties responsible for the decision are brought together, where does the onus lie?” Pearce asks. “With the C-suite? With the whole team? If accountability is shared across the organization, no one can go to jail. In general, shared accountability often becomes no accountability.”
Setting boundaries: Where does AI’s authority end?
So where should organizations draw the line? When should AI insight give way to human decision-making?
Three considerations are key: ethics, risk and trust.
“When it comes to ethical dilemmas,” Pearce says, “AI falls short.” That’s because intelligent tools naturally optimize for efficiency rather than ethics. Any decision involving an ethical dilemma should therefore include human oversight.
Risk, on the other hand, is where AI excels. “AI is great for risk assessment,” Pearce says. “Statistical models come with a standard error term, which lets you evaluate whether an AI recommendation carries high or low potential variability.” This makes AI invaluable for risk-based decisions, such as those in finance or insurance.
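As a rough illustration of the principle Pearce describes (the data, the 5% escalation threshold and the function names below are invented for this sketch, not drawn from the article), a lender could bootstrap a standard error around an AI-estimated default rate and escalate high-variability cases to a human underwriter:

```python
import random

def bootstrap_standard_error(estimate_fn, sample, n_boot=1000, seed=42):
    """Estimate the standard error of a statistic by resampling with replacement."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        resample = [rng.choice(sample) for _ in sample]
        estimates.append(estimate_fn(resample))
    mean = sum(estimates) / n_boot
    variance = sum((e - mean) ** 2 for e in estimates) / (n_boot - 1)
    return variance ** 0.5

# Illustrative loan history: 1 = borrower defaulted, 0 = repaid in full.
outcomes = [0] * 93 + [1] * 7

def default_rate(xs):
    return sum(xs) / len(xs)

se = bootstrap_standard_error(default_rate, outcomes)
print(f"Estimated default rate: {default_rate(outcomes):.1%} (standard error {se:.1%})")

# Low variability: act on the model. High variability: escalate to a human.
decision = "auto-decide" if se < 0.05 else "refer to human underwriter"
print(decision)
```

The exact threshold is a business choice; the point is that the standard error gives decision-makers a quantitative handle on how much confidence an AI recommendation deserves.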
Finally, enterprises must prioritize trust. “Trust in institutions is declining,” Pearce emphasizes. “Many citizens aren’t confident that the data they share is being used responsibly.”
Under GDPR, for example, organizations must be transparent about how they collect and handle data, and must give citizens the option to opt out. To build trust in their use of AI, companies should clearly communicate why they use it and, wherever possible, let customers and clients opt out of AI-driven processes, along the lines of the sketch below.
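As a minimal sketch of what honoring such an opt-out could look like in code (the Customer record, its field names and the routing function are hypothetical assumptions, not a documented implementation):

```python
from dataclasses import dataclass

@dataclass
class Customer:
    """Hypothetical customer record; field names are illustrative."""
    name: str
    ai_opt_out: bool = False  # Set to True when the customer declines AI processing.

def route_inquiry(customer: Customer, inquiry: str) -> str:
    """Send the inquiry to an AI assistant unless the customer has opted out."""
    if customer.ai_opt_out:
        return f"Queued for a human agent: {inquiry!r}"
    return f"Handled by the AI assistant: {inquiry!r}"

print(route_inquiry(Customer("Ada"), "What are your fees?"))
print(route_inquiry(Customer("Bob", ai_opt_out=True), "What are your fees?"))
```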
Decisions, decisions
Should AI make managerial decisions? Possibly. Will AI end up making some of these decisions? Almost certainly. The appeal of AI, namely its ability to capture, correlate and analyze disparate data sets for new insight, makes it a powerful tool for enterprises looking to streamline operations and reduce costs.
What remains unclear, however, is how this shift toward AI-driven managerial decision-making will affect accountability. According to Pearce, current conditions create “ambiguous boundaries” here; legislation has not kept pace with the rapid growth of AI use.
To stay aligned with ethical principles, reduce the risk of bad decisions and earn the trust of stakeholders and clients, businesses are best served by keeping humans in the loop. That might mean requiring direct staff approval before AI can act, or periodically reviewing the outcomes of AI decisions, as in the sketch below.
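A minimal sketch of that first pattern, assuming a simple confidence threshold (the ai_recommend stub, the 0.9 floor and the function names are illustrative assumptions, not a prescribed design):

```python
def ai_recommend(request: dict) -> dict:
    """Stand-in for a model call; returns a proposed action and a confidence score."""
    return {"action": "approve", "confidence": 0.78}

def decide_with_human_in_loop(request: dict, confidence_floor: float = 0.9) -> str:
    """Act on the AI's recommendation only when its confidence clears the floor;
    otherwise hold the decision for explicit staff sign-off."""
    rec = ai_recommend(request)
    if rec["confidence"] >= confidence_floor:
        return f"AI executed: {rec['action']} (logged for periodic review)"
    return f"Held for staff approval: {rec['action']} at {rec['confidence']:.0%} confidence"

print(decide_with_human_in_loop({"type": "loan", "amount": 25_000}))
```

The design choice here is deliberate: the AI proposes, but below the floor it cannot act on its own, and every automated action leaves a record that humans can audit later.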
Whatever approach an organization takes, the underlying message is the same: in AI-driven decision-making, there is no clear dividing line. It is a moving target, defined by potential risk, potential reward and probable outcomes.