
Invisible work, visible risk: what boards are missing about AI

Why AI’s biggest governance risks are emerging through people, not technology.

Helen van Orton MInstD
12 Mar 2026

Artificial intelligence is already reshaping how work gets done in organisations. Much of that change is happening quietly, embedded in workflows, decisions and outputs that boards rarely see or interrogate.

That invisibility is an AI governance issue because it shifts capability, risk and accountability without a corresponding shift in oversight.

In the past week, Matt Shumer, CEO and co-founder of OthersideAI, saw his essay ‘Something Big Is Happening’ go viral on X, drawing more than 85 million views in five days. He argues that newer AI models are rapidly compressing screen-based work, and that entry-level knowledge roles may be reshaped first.

Boards do not need to agree with every forecast to take the implication seriously. If even part of this direction of travel is right, it creates a workforce risk and a governance design question: what happens to your talent pipeline, your controls and your succession plan when foundational work changes?

As chair of a people and culture committee, I’m seeing first-hand how quickly AI is changing roles, expectations and capability inside organisations.

Many boards still treat AI as a technology issue: a software investment, a cyber security consideration or a productivity lever. That framing is important but incomplete, because material risks sit on the people side: capability, fairness, culture and the sustainability of future leadership.

Last year, a study by Anthropic, the US-based AI company behind Claude, found early adopters reporting major gains, with AI saving time and improving quality. Yet Deloitte’s global boardroom survey the same year found 45% of directors and executives said AI was not yet on the board agenda.

The Australian Institute of Company Directors’ research in November 2025 suggests many boards are still early in their own adoption, with collective board use lagging ‘shadow’ individual director use.

When the workforce accelerates and the board lags, a governance gap opens up.

Directors don’t need technical depth, but they do need enough grounding to govern confidently and to recognise where risks and opportunities are shifting, particularly where the people impacts are showing up first.

The vanishing entry-level pathway

Generative AI is eroding entry-level roles. Microsoft research has highlighted that many tasks in roles such as analyst, writer and customer service agent overlap heavily with generative AI capabilities. This week, Microsoft AI CEO Mustafa Suleyman predicted that most white-collar tasks could be automated within 12 to 18 months.

Those roles have traditionally been a training ground for judgement and context. If they thin out, organisations lose a core development pathway. Boards focused on succession should be asking where tomorrow’s leaders will come from, and what replaces ‘learning by doing’.

Invisible work and outdated performance assumptions

AI is already powering quiet, invisible workflows. Outputs are faster and more polished, often without any signal that AI was involved. If leaders do not understand how performance is being achieved, decisions about workload, resourcing and reward may be made on outdated assumptions about effort, time and value.

If managers are not resetting expectations and re-scoping what “good” looks like, organisations risk confusing speed with depth. In a world where AI can halve the workload, are we asking enough of our people, or still measuring them by how long something used to take?

Boards also need to decide what happens to the capacity AI unlocks. Will it be reinvested in innovation and growth, or extracted through cost reduction? Either approach may be valid, but it is a strategic decision and therefore a board one. 

The AI fluency divide is now a governance issue

Reskilling and upskilling can sound reassuring. However, access to tools and training remains uneven. Use often concentrates among a confident minority, while others lack permission, confidence or clarity about where to start. The result is a two-speed workforce.

Fluency underpins the ability to govern: to see where AI is embedded, understand what is changing in work, and ask management for evidence on impact and control.

Unequal impact: AI and workforce equity

The fluency divide is not evenly distributed. International Labour Organization analysis suggests women’s jobs, particularly in clerical and administrative work, are more exposed to AI-driven task change. Harvard research suggests women are adopting AI tools at lower rates, often due to access, permission or confidence.

If adoption tracks unevenly by gender, age or role, organisations risk narrowing future leadership pipelines and entrenching inequality. Boards should expect adoption and outcomes to be tracked by cohort, with actions to close gaps.

Bias in the hiring and promotion loop

AI is increasingly used across recruitment and talent processes, including screening CVs, ranking candidates and, in some cases, informing promotion decisions. These systems are trained on historical data that often reflects past bias. Amazon’s abandoned AI recruitment tool that favoured male candidates is one well-known example, but harms can also show up through penalties for CV gaps, non-linear careers, caregiving histories or non-Western names.

In Australia, research and public-sector reviews have flagged disproportionate impacts from automated screening, psychometric testing and AI-scored video responses for marginalised groups when tools are not validated across cohorts.  

In the United States, enforcement actions have referenced age-based auto-rejection and alleged race and disability bias, with some employers already paying penalties.

Boards should be asking where AI is influencing employment decisions, what testing has been done for disparate impact, and what human review and appeal processes exist.

What boards should be asking now 

    • Where is AI embedded (formally and informally), and what is our minimum board AI fluency standard for oversight this year?
    • Where is AI influencing recruitment, promotion and performance decisions, and what evidence do we have that bias is being identified and addressed?
    • What is changing in entry-level pathways and performance expectations, and how will we use the capacity AI unlocks: invest, innovate or cut?

AI may be absent from board papers, but it is already reshaping how work is done, how leadership develops, and how fair an organisation truly is.

I co-authored this piece using AI, because fluency matters. Governance, however, still requires human judgement.


Helen van Orton MInstD is CEO of Directorly Ltd, which provides board-specific AI governance training delivered in the boardroom and tailored to an organisation’s context and current capability.

The views expressed are those of the author and do not necessarily reflect the views of the Institute of Directors.