Internal Audit & AI: finding the right balance w/ Tom Edwards & Chandrra Sekhaar

5 mins

Artificial intelligence is no longer a future consideration for internal audit. It is already embedded across the business, reshaping how decisions are made, how controls operate and how risk is managed.

For audit leaders, that creates an unavoidable tension. Move too slowly and risk becoming disconnected from the business. Move too quickly and risk relying on something that is not yet fully understood.

So how should audit leaders navigate that balance?

For this edition of Behind the Controls, I spoke with Chandrra Sekhaar, an experienced internal audit leader who has spent his career working across complex, highly regulated organisations and who’s passionate about transforming internal audit with AI.


A career built across all three lines of defence

Chandrra’s perspective is shaped by the breadth of his career.

Currently Head of Audit at the Public Investment Fund (PIF), Chandrra previously worked across five global banks, including UBS, Deutsche Bank, RBS and Mizuho, and two of the Big 4, with experience spanning the first, second and third lines of defence. He began in first-line FIC risk management, moved into second-line operational risk and has spent the last decade in internal audit, covering both technology and business audits.

That journey matters.

It means he understands how the business actually operates, how risk decisions are made in practice and how audit conclusions land with senior stakeholders. It also explains why he is cautious about simplistic narratives when it comes to AI.


“AI is not the future. AI is the present.”

One of Chandrra’s first observations sets the tone for the conversation.

“AI is not the future. AI is the present.”

His point is not that AI is perfect or fully mature. It’s that audit does not have the option of ignoring it, because AI is already everywhere. If audit is meant to follow the risks created by the business, then audit has to understand and engage with the tools the business is using, rather than treating AI as a future topic or a specialist add-on.

As Chandrra puts it:

“Audit goes where the business goes.”


Why audit leaders are investing in AI

When asked why an audit leader should invest in AI tooling, Chandrra is clear.

Efficiency, cost reduction and broader coverage all matter, but none of them are the real driver on their own. For him, the starting point is what audit is fundamentally meant to do: apply curiosity, professional scepticism and judgement to understand whether risks and controls make sense.

AI can support that if it is used well.

“It opens the whole world up at your fingertips, powered by the knowledge that AI brings to you.”

Used properly, AI gives auditors faster access to alternative perspectives, comparable scenarios and emerging risk patterns that would previously have taken far longer to identify.


From sample testing to full population testing

Chandrra sees a clear shift in expectations around how audit testing is performed.

Sample testing has long been a practical necessity, but he believes that justification is weakening quickly.

“The modern auditor, the regulators or even the business wouldn’t care about sample testing.”

AI tools now make full population testing far more accessible, even for teams without advanced coding capability. Chandrra points out that many tools can now guide auditors through scripting and analysis, lowering the technical barrier significantly.

“If you don’t know Python, AI tools can tell you how to write the code.”

This is not about chasing the latest platform. It is about recognising that the scale and depth of testing audit can reasonably be expected to perform is changing.


AI as a second perspective, not a replacement for judgement

While efficiency gains are often the headline, Chandrra is more interested in how AI can enhance judgement rather than replace it.

Audit quality improves when auditors consider multiple perspectives, question assumptions and look beyond the immediate control environment. AI can support that by quickly surfacing external examples, comparable issues in other markets or alternative interpretations of the same data.

“The more perspectives you get, the better it is for the auditor to come to an informed judgement.”

This is where AI adds real value, according to Chandrra: not as an answer engine but as an input into better questioning.


The risk of over-reliance, especially for junior auditors

One concern I raise is the risk of auditors becoming too reliant on AI outputs, particularly earlier in their careers.

Chandrra is clear that this is a genuine risk if not managed carefully.

“It’s very easy to say the AI has done it, I’ve got nothing else to do.”

Professional judgement, scepticism and experience still matter. AI can work quickly, but it does not always work correctly.

“You still have to apply professional judgement to see whether it has done the right thing.”

His practical safeguard is simple: back-testing.

“Back test it and ensure the results match what you would expect.”


What AI means for audit team size

The question many audit leaders are quietly asking is what all of this means for team structures.

This is not a theoretical debate. It is already starting to play out in the market.

Several of the large professional services firms have begun reducing their junior intake, particularly at graduate and entry levels. The Big Four and other accounting firms are signalling a shift in how much junior capacity they need as technology and automation cover more ground more quickly.

The rationale is not difficult to understand. AI can process larger volumes of data, perform routine testing and surface anomalies at a speed and scale that junior-heavy teams have traditionally delivered through headcount.

Chandrra believes internal audit will follow a similar path.

“Five years from now, audit will not be what it is today.”

He does not believe audit functions disappear, but he does believe they become materially leaner as AI takes on a greater share of repeatable work. He also expects the way audits are delivered to change, particularly for smaller organisations.

Some firms that historically outsourced internal audit will be able to use AI-enabled tools to run significant parts of their audits themselves. Rather than periodic, point-in-time reviews, these tools lend themselves to more continuous, end-to-end auditing.

“It will be used mostly like a continuous auditing tool.”

When asked to put a rough number on the change in scale, Chandrra’s view is candid.

“I would say it could be a third of what is currently needed.”

For him, this is less about headcount reduction and more about a shift in the shape of audit teams: fewer people focused on manual testing and coordination, and more people focused on judgement, interpretation and oversight of technology-enabled assurance.

That balance still matters, because not everything lends itself to automation.

“There are some very technical and complex areas where you still need auditors’ judgement.”

Those areas, he believes, will continue to define the value of internal audit as AI becomes more embedded in how assurance is delivered.


What gets automated first and what does not

Chandrra draws a clear distinction between standardised work and areas that require deeper judgement.

He sees AI being well suited to large parts of AML and KYC testing, settlement operations, confirmations, accounts payable and corporate function audits such as HR.

“All the vanilla things can be handled much more easily.”

Where AI struggles is with complexity and nuance.

Structured trades, bespoke transactions, derivatives and areas where documentation is long and judgement sits between the lines still require experienced human review.

“You won’t be able to code all possibilities.”


Governance, data and security become central audit themes

As AI use increases, Chandrra expects audit’s focus to shift further towards technology governance.

Two points came up repeatedly: data integrity and AI security.

“Garbage in, garbage out.”

Even the most sophisticated AI tool will produce unreliable outputs if the underlying data is incomplete or inaccurate. Equally, if the tool itself can be manipulated, reliance becomes dangerous.

He also highlighted the importance of evidence when relying on AI outputs.

“Take previous audits, send the same data through the tool and see what it produces.”

He gives a concrete example of what that looks like in practice:

“Let’s say you send 20,000 transactions through the AI and it comes up with ten exceptions. You then need to pick a few of those and trace them back. Would you have identified the same exceptions yourself? Are there more that the tool has missed? Are there fewer because it has over-filtered? You have to test whether it is even doing the right thing before you say, ok, we now rely on it.”
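
The back-testing discipline Chandrra describes can be sketched in a few lines: compare the exceptions the tool flags with the ones a previous, manually performed audit identified on the same data. All transaction IDs below are hypothetical.

```python
# A minimal back-testing sketch: re-run a past audit's data through the
# tool, then compare its exceptions with the manually identified ones.
# All IDs are illustrative.

def back_test(manual_exceptions, ai_exceptions):
    """Compare AI-flagged exception IDs against a manually audited baseline."""
    manual, ai = set(manual_exceptions), set(ai_exceptions)
    return {
        "matched": sorted(manual & ai),             # found by both
        "missed_by_ai": sorted(manual - ai),        # tool under-detected
        "flagged_only_by_ai": sorted(ai - manual),  # trace back before relying
    }

# Exceptions from a previous manual audit vs the tool's output on the same data
result = back_test(
    manual_exceptions=["T014", "T231", "T877"],
    ai_exceptions=["T014", "T231", "T905"],
)
print(result)
```

Anything in `missed_by_ai` or `flagged_only_by_ai` is exactly the "too few or too many exceptions" question in the quote above, and needs investigating before reliance makes sense.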

For Chandrra, this step is critical.

AI outputs should not be trusted simply because they are fast or sophisticated. They need to be challenged in exactly the same way an auditor would challenge the work of a team member.

Only once the results align with professional judgement, and only once that alignment is repeatable, does reliance start to make sense.

That discipline, he believes, will be one of the defining features of strong audit leadership in an AI-enabled environment.


The reality for smaller and less mature organisations

Not every organisation has clean, centralised data or mature infrastructure, and that is true of small and large organisations alike.

Chandrra is pragmatic about this.

His advice is to start where confidence is highest and complexity is lowest, building experience gradually rather than attempting a wholesale transformation.

“Keep it simple and go at a pace you can manage.”

He also stresses that audit cannot move faster than the business itself.

“Audit cannot front-run the business.”

Where systems and data are fragmented, the business needs to address that first before AI-enabled audit can work effectively.


Accountability does not disappear

Looking further ahead, Chandrra acknowledges that there will inevitably be cases where organisations rely too heavily on AI and get it wrong.

When that happens, accountability remains clear.

“It is always the SMF5 who is liable.”

Under the UK’s Senior Managers Regime, SMF5 is the Head of Internal Audit function. Technology does not remove that responsibility. It changes how it needs to be exercised.


A measured closing thought

Chandrra summed up his view with three principles regarding AI.

“Embrace it. Don’t fight it. Proceed with caution.”

AI is not something audit leaders can ignore. But it is also not something that should be adopted blindly. The challenge is to integrate it in a way that strengthens judgement rather than weakens it.


Over to you

How is your audit function approaching AI today? What feels like the biggest risk right now: data quality, over-reliance, skills, or governance?

Share your thoughts in the comments. I’d love to hear how your teams are preparing for what’s coming next.


Want more insights like this?

Subscribe to Behind the Controls to stay ahead of what’s shaping Audit, Risk and Compliance leadership, and how top professionals are navigating it.