Treating emerging tools as tech upgrades misses the point – and the deeper opportunity to rethink how we create and deliver value.
We’re in the “post-enthusiasm” wave of Generative AI (GenAI), or what Gartner calls the Trough of Disillusionment.
After the hype, the reality has landed: most pilots are stalling, leaders are frustrated and returns are underwhelming. This isn’t because AI doesn’t work – it’s because we’re treating it like an IT initiative rather than what it really is: a business transformation.
We’re using powerful tools to do small things (drafting faster emails, automating admin), while missing the deeper opportunity to rethink how we create and deliver value. When AI is applied only at the edges of the organisation, it delivers marginal gains. Not only does this leave employees disillusioned, but it also creates risks and increases organisational complexity.
Encouraging experimentation with AI sounds like a good idea, but in the absence of clear strategic direction, employees end up using whatever AI tools they can get their hands on. This “shadow AI” phenomenon is not just a compliance risk – it’s a canary in the coalmine. If 20 employees each create separate agents to replace 20 different flawed processes that accomplish the same task, the result won’t be improved productivity but rather increased risk, complexity and technical debt.
When everyone is running pilots without coordination or alignment, you end up with a mess. On the other hand, when AI is applied at the core, it can drive exponential change. But getting to that point requires strategic clarity, cultural readiness and most of all, robust governance.
Too many organisations have fallen into the “10,000 flowers” problem – multiple AI experiments blooming in silos, with no clear connection to business outcomes. This echoes the previous wave of digital transformation: leaders uncertain about the path forward encouraged widespread innovation and experimentation with digital tools in the hope of discovering unicorn-level returns.
However, without clear connections to actual business opportunities (and the problem you are solving), this approach resulted in numerous unfocused, under-resourced teams that failed to produce scalable results. Disappointed by these poor outcomes, many leaders concluded that digital experimentation was fundamentally broken and abandoned their initiatives altogether.
Leaders cannot afford to make the same mistake with AI – it is here to stay, and failing to adapt risks your organisation being displaced or disrupted. But adopting AI is not the same as adapting to it.
The difference between a pilot that dies in committee and one that scales across the organisation is governance. It’s the often-overlooked layer that connects experimentation with impact.
At its core, adapting to AI requires strategic alignment (AI must tie to core business goals), board ownership (because AI cannot be delegated to IT or innovation teams alone), and prioritisation (using frameworks that assess business value, feasibility, adoption potential and ethical risk). Without these layers, even the most promising pilots become science projects: technically impressive but commercially irrelevant.
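One way to make such a prioritisation framework concrete is a simple weighted scoring matrix. The sketch below is a minimal, hypothetical example in Python: the criteria follow the four dimensions named above (business value, feasibility, adoption potential and ethical risk), but the weights, the 1–5 scales and the sample use cases are illustrative assumptions, not a prescribed method.

```python
from dataclasses import dataclass

# Illustrative weights only - a real board would calibrate these to its own strategy.
WEIGHTS = {"business_value": 0.40, "feasibility": 0.25, "adoption_potential": 0.20, "ethical_risk": 0.15}

@dataclass
class UseCase:
    name: str
    business_value: int      # 1-5: contribution to core business goals
    feasibility: int         # 1-5: data, skills and integration readiness
    adoption_potential: int  # 1-5: likelihood teams will actually use it
    ethical_risk: int        # 1-5: higher means riskier, so it counts against the score

def priority_score(uc: UseCase) -> float:
    """Weighted score; ethical risk is inverted so riskier cases rank lower."""
    return (
        WEIGHTS["business_value"] * uc.business_value
        + WEIGHTS["feasibility"] * uc.feasibility
        + WEIGHTS["adoption_potential"] * uc.adoption_potential
        + WEIGHTS["ethical_risk"] * (6 - uc.ethical_risk)
    )

# Hypothetical pipeline of candidate use cases, ranked highest priority first.
pipeline = [
    UseCase("Automate supplier-invoice matching", 4, 5, 4, 1),
    UseCase("GenAI chatbot drafting customer emails", 2, 3, 3, 4),
]

for uc in sorted(pipeline, key=priority_score, reverse=True):
    print(f"{uc.name}: {priority_score(uc):.2f}")
```

The value of the exercise is less in the arithmetic than in forcing every pilot to be scored against the same strategic criteria before it consumes resources.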
Unlike other technology rollouts, successful AI adoption requires both top-down strategic direction and bottom-up innovation working in tandem. Because GenAI has democratised access to AI tools, employees are already using them (whether corporate policy permits it or not), creating an urgent need for leaders to navigate the tension between individual experimentation and organisational priorities. Meeting that need requires organisation-wide upskilling, clear communication and buy-in to a shared AI roadmap.
This demands distinct approaches at each level: individuals need training and permission to experiment with context-appropriate tools; teams require governance frameworks, feedback loops, and the ability to share successful use cases; and organisations must provide central oversight that ensures strategic alignment, security and compliance while fostering grassroots innovation. Without this coordinated, multi-level approach, AI initiatives fragment into inconsistent implementations that fail to deliver enterprise-wide value.
Finally, boards should not over-index on GenAI, especially when looking at organisation-level AI projects. For a large share of organisational problems, particularly those involving structured or tabular data, classical models deliver better accuracy, lower cost, lower latency and cleaner governance.
Meanwhile, core GenAI risks (hallucinations, opaque reasoning and data-security exposure) can only be mitigated, not solved. Use GenAI where the problem is genuinely open-ended and language-heavy; otherwise, pick fit-for-purpose traditional AI and, when needed, compose hybrids that keep GenAI at the edge (UX/extraction) and classical models at the core (decisioning).
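To make the “GenAI at the edge, classical at the core” pattern concrete, here is a minimal Python sketch. The scenario (a loan application), the field names and the stubbed extraction call are all hypothetical assumptions for illustration; the point is the split of responsibilities: the language model only converts free text into structured fields, while an auditable, deterministic model makes the actual decision.

```python
import json

# Stand-in for a GenAI extraction step (e.g. an LLM prompted to return JSON).
# In practice this would call whichever model the organisation has approved;
# here it is faked with a fixed response so the sketch runs on its own.
def genai_extract_fields(free_text_application: str) -> dict:
    # A real implementation would prompt an LLM and validate the JSON it returns.
    return json.loads('{"monthly_income": 5200, "existing_debt": 800, "employment_years": 6}')

# Classical, auditable decision logic at the core - a rule-based scorecard here,
# though in practice this might be a regression or gradient-boosted model.
def credit_decision(fields: dict) -> str:
    debt_ratio = fields["existing_debt"] / max(fields["monthly_income"], 1)
    score = 0
    score += 2 if debt_ratio < 0.3 else 0
    score += 1 if fields["employment_years"] >= 3 else 0
    return "approve-in-principle" if score >= 2 else "refer-to-analyst"

application = ("I earn about $5,200 a month, owe $800 on a car loan "
               "and have worked at the same firm for six years.")
structured = genai_extract_fields(application)   # GenAI at the edge: language -> structure
print(credit_decision(structured))               # Classical model at the core: structure -> decision
```

Containing GenAI at the boundary like this means a hallucinated or malformed field can be caught by validation before it ever reaches the decision logic, which keeps the core of the process explainable and governable.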
Trying to govern AI with quarterly reports and waterfall thinking doesn’t work in an exponential world. Strong governance isn’t about slowing things down; it’s about building guardrails that enable speed, experimentation and scaling without chaos. And the good news is that the skills board members excel at (strategic thinking, risk management, stakeholder communication and crisis navigation) are exactly the skills AI teams need to get from proof of concept to scale. Board members don’t need to be AI engineers, but they must be AI-literate and willing to engage deeply.
If you’re on a board or leading an executive team, this is your new responsibility. You don’t need to run the algorithm – but you do need to ask: Is this solving a real problem? Does it tie to value? Can it scale? Is it governed well? Governance is not the brake on AI. It’s the steering wheel.
Elsamari Botha MInstD
Elsamari Botha MInstD is an Associate Professor at the University of Canterbury and Director at Future Forge Ltd. Her work focuses on the adoption, governance and monetisation of emerging technologies in organisations.