‘Shadow AI’ signals a new digital blind spot

Unmanaged AI use is creating hidden risks. Boards must act now to bring shadow AI into view — and under governance.

Article · By Institute of Directors · 9 Jul 2025 · 2 min read

As artificial intelligence tools become more accessible, a new governance challenge is emerging: shadow AI.

This term describes the use of AI technologies – such as ChatGPT or image generators – by employees without formal organisational oversight or approval. Like shadow IT before it, shadow AI arises from a desire to improve productivity, but its unmanaged nature presents both risks and opportunities for boards to consider.

The most pressing concern for directors is that shadow AI can operate outside the boundaries of existing cybersecurity, privacy and risk frameworks. Staff may input confidential or sensitive data into third-party tools without realising that information could be stored, used for model training or exposed through security vulnerabilities.

By the end of 2025, significantly more powerful generative AI tools will be readily accessible and, inevitably, adopted by staff, deepening organisational reliance on unverified AI use. The unintended consequences of unmanaged AI pose serious risks for directors.

The absence of visibility and accountability over these tools means boards may be unaware of their use until a compliance breach or incident occurs. This undermines directors’ ability to meet their obligations to ensure effective internal controls and oversight of material risks.

Despite these risks, however, shadow AI reveals a positive undercurrent: staff are becoming more willing to innovate and streamline their work. In many cases, employees turn to generative AI to draft emails, summarise documents or support coding tasks.

When surfaced and properly governed, shadow AI activity can inform broader digital strategy. It can highlight use cases worth formalising, or flag areas where teams lack tools that meet their evolving needs. Rather than banning such tools outright, forward-thinking organisations are choosing to build governance frameworks that support safe, strategic AI use.

Boards must continue to proactively oversee how AI is being used across the organisation – intentionally or not. This includes:

    • Confirming AI policies and risk controls are in place
    • Ensuring the executive team has visibility over informal AI use
    • Supporting training for staff and directors on responsible AI practices
    • Aligning AI use with the organisation’s data governance and strategic objectives

Shadow AI is here. Whether it becomes a liability or an asset depends on how swiftly boards respond.


The IoD is hosting two upcoming AI Governance Essentials courses in Auckland (5 August) and Wellington (17 September). The one-day course helps directors understand how to oversee AI safely, strategically and in line with good governance practice. Learn how to ask the right questions, manage risk and support responsible AI use across your organisation.