Trust, risk and the quiet rise of AI in your organisation

KPMG’s latest research offers directors a clearer view of how AI is being used – and what’s slipping through the cracks.

Article
By Susan Cuthbert, Principal Advisor, IoD
16 May 2025 | 3 min read

A new global report from KPMG and the University of Melbourne, Trust, attitudes and use of artificial intelligence, puts a spotlight on a growing challenge for boards.

Boards are under increasing pressure to understand how AI is being used inside their organisations – and whether that use is safe, consistent and well-governed. KPMG’s report brings valuable data to that question while also looking more broadly at public perceptions of trust. It draws on responses from over 48,000 people across 47 countries, including New Zealand.

Globally, 66% of people say they use AI regularly. In New Zealand, that figure is lower, at 50%. What is more striking is that, while around half of people globally trust AI systems, New Zealand ranks among the lowest on acceptance, excitement and optimism. Only 34% say they trust it.

The report defines trust as a willingness to rely on AI and to share information with it – in other words, whether people believe the systems they’re using are safe, accurate and fair. And many people don’t.

Inside organisations, AI is in daily use – frequently without oversight or any structured training. Staff are using generative tools in their work, often relying on output without checking it. According to the report, most don’t tell their managers they’re using it. It also shows that even where organisations have policies in place, staff often don’t follow them. Thirty-four percent of staff have used AI in ways that contravened policies and guidelines. 

That gap between use and oversight creates a raft of risks – from staff unintentionally sharing confidential or commercially sensitive information with external AI platforms, to breaching others’ intellectual property, relying on unchecked outputs in board papers, or damaging the organisation’s reputation. It also raises questions about whether staff are building the judgment and skills they need to assess AI-generated material critically.

I spoke with Cowan Pettigrew, Chief Digital Officer at KPMG, about what this means for boards. He pointed to something the report shows clearly, but that many boards may not yet have recognised.

“Every organisation already has an AI risk profile,” he said. “Even if the board hasn’t formally approved the use of AI, it’s happening. So the question is: do you understand what that profile looks like, and are you governing it properly?”

Cowan described three conditions that compound the risk: AI is being used, most staff haven’t been trained, and there’s little or no formal oversight. When those three things come together, the organisation is exposed. And for many boards, that’s already the case.

So what should directors be thinking about?

Cowan says the first step is putting a clear policy in place. Define what’s acceptable, what’s not, and where human oversight is needed. Then focus on education. The report shows that training makes a real difference – staff are far less likely to make mistakes once they’ve been shown how these tools actually work. Both should be supported by operational processes that make it easy for staff to understand the rules and engage with them.

The next step is transparency. Trying to explain the algorithm itself isn’t useful when the models are constantly changing. What matters is being upfront about where and when AI is being used – whether in customer communications, content generation or decisions. 

In New Zealand, 76% of people are concerned about negative outcomes from AI, and 81% believe it should be regulated. These figures highlight why organisations need to be clear, open and proactive in how they use AI if they want to build trust. 

Only then should boards turn their attention to platforms. “If you have your staff using an AI platform that doesn’t meet your governance requirements, it can be hard to unwind,” Cowan said. 

Directors should understand which platforms are in use, what data is going into them, and whether those choices align with the organisation’s risk appetite and long-term intent.

Trust in AI may currently be low, but that doesn’t mean trust can’t be built. It means that trust, like everything else in governance, needs clarity, consistency and leadership. The organisations that build that trust now are the ones most likely to realise the value AI can bring.


The Institute of Directors offers a suite of practical resources to support directors in strengthening their governance of artificial intelligence. These include A Director’s Guide to AI Board Governance, a targeted course on AI Governance for Boards, and access to recordings from the AI Forum sessions. Together, these tools are designed to help boards build confidence, ask the right questions and lead responsibly in an AI-enabled future.

KPMG also provides a range of thought leadership and insights on AI and emerging technologies, available on its website.