April 28, 2024

AI and automation: Have you considered the risks?

‘AI is revolutionising how work is organised, from blurring traditional job roles to introducing novel workflows’.

THE automation bandwagon has been building momentum for years, and rapid increases in the effectiveness of artificial intelligence (AI) are accelerating it further.

After all, what’s not to like? Algorithms take over repetitive processes from overstretched humans, and because they never need coffee or have family or health issues, they are much more reliable and accurate. The humans are freed up to do what humans do best — make connections, build relationships, develop strategy, and come up with new ideas that might be totally counterintuitive but nonetheless change the game.

As AI draws closer to the Holy Grail of generative AI — that is, AI that can more accurately mimic these capabilities, which we have always supposed to be markers of our human superiority — automation is able to take on more complex and important tasks.

A recent Gartner paper predicts that AI will become progressively more embedded in what we call knowledge work. By 2026, workflow tools and agents will drive efficiencies for 20% of knowledge workers (compared to less than 1% today), and 30% of new applications will use AI to drive personalised, adaptive user interfaces (up from less than 5% in 2023). By 2028, conversational applications powered by large language models will provide advisory and intervention roles for 50% of knowledge workers, in stark contrast to the 5% in 2023.

All these positives should not blind us to the very real risks inherent in automation, particularly AI-powered automation. The more business processes are automated, the more reliant we become on AI — and thus the greater the risk AI represents.

Here are some of these risks — and it’s likely you have missed some of them:

IT risk

AI and automation are technologies, and like all technologies they come with the risk that what the technology team puts in place does not fully correspond with what the business wants. And even if the automation is perfect, there is a tendency to see it as something that is done and dusted — something one author calls “automation complacency”. In fact, business environments are often both complex and fast-changing, and processes (and thus their automations) may need to change again and again.

In general, automations need to be constantly improved, and it is wise to make it easy for users to provide feedback that is integrated into the improvement cycle.

A related risk is the automation of security policy controls. If these automated policy controls are not revisited in line with changing business requirements, they can act as a brake on efficiency.

Conversely, automation might affect the process itself in ways that could create unforeseen risk.

Automation increases the business’s already-significant reliance on, and identification with, its technology. The ever-present, and worsening, risk from cybercriminals does not affect just data but also automated decision-making.

Supply/value chain risk

As automation spreads across the value chain within the company and then across its supply chain, largely driven by the increasing use of AI, the potential for unexpected (and negative) outcomes rises. For example, if one system “chats” to a supplier system to find out when to expect a delivery, and the information is incorrect, a series of knock-on disasters could result.

Privacy risk

When AI enters the automation (or any other) equation, we should start to recognise increased regulatory risk relating to privacy. AI has an insatiable appetite for information; the more it has, the better it performs. Much of the most valuable information will fall foul of privacy regulations like our homegrown Protection of Personal Information Act (PoPIA) or the European Union’s General Data Protection Regulation (GDPR).

Decision risk

As decision-making becomes more and more automated in line with the advancing capabilities of AI, the business risk rises. This is particularly true for companies where AI-driven automation has high financial stakes — banks that make investment decisions or approve or reject loans, for example. Finance and other heavily regulated sectors need to prioritise the review of all their processes in light of the regulations, with a special emphasis on automated processes.

Like any computer program, AI is only as good as the algorithm it uses. Great care needs to be taken to ensure that the algorithm does not itself contain bias towards a certain way of thinking or a certain type of information — bias that could affect the validity of the business decision-making process. An obvious example would be the current furore over the intrusion of social justice dogma into the algorithm of a prominent technology platform’s AI, which has made it a laughingstock around the world. In business, robust data validation processes need to be in place to ensure that the AI is using correct information.

In conclusion, it’s clear that automation offers massive potential benefits both to companies and their employees, especially as AI comes to play a more prominent role. However, the obvious benefits can mask several risks that must be recognised and mitigated. The review and mitigation process can be materially advanced as the internal audit function is automated, freeing up auditors to analyse the information and suggest improvements.

Zakariyya Mehtar is director of IT Assurance at Mazars in South Africa.
