Chaos in the risk universe: lessons from the 3 Body Problem


Elegant frameworks often fail when strategy, risk and human behaviour collide. Can we fix it?

By Kevin Jenkins CMInstD
13 Oct 2025

OPINION: The 3 Body Problem novelises a classic physics problem: the motion of three bodies orbiting each other has no general closed-form solution, and their trajectories are effectively chaotic.

Liu Cixin’s novel came to mind when, after leading inquiries into management failures, I identified three major problems with traditional organisational risk management. Often, elegant risk management regimes are disconnected from core management. I started to wonder whether risk management has an empirical base, or whether it’s just paperwork designed to make us feel like we’re doing something.

I read a lot, including The Failure of Risk Management: Why It’s Broken and How to Fix It by Douglas W. Hubbard (2009).

I also connected with experts Erica Jenkin, a fractional risk consultant; Ben Stevens, founder and CEO of risk and strategy platform flipview.co.nz; and Gavin Pearce, former director of risk and compliance at RBNZ.

Hubbard captures the key insight with a line from Carl Sagan: “It is far better to grasp the universe as it really is than to persist in delusion, however satisfying and reassuring.”

I found a soulmate when Hubbard asked:

    • “Do any of these risk management methods work?
    • Would anyone in the organisation even know if they didn’t work?
    • If they didn’t work, what would be the consequences?”

A lot of risk management reporting is tedious. Spreadsheets, heatmaps and other brightly coloured templates make you hunt for insights like you’re solving a puzzle. I’ve seen 200+ pages of risk appetite statements all written in difficult-to-measure gobbledegook.

That’s one reason why risk management is often the last item on the agenda: it lets people say, “We’re short of time, let’s do this quickly”.

In one large organisation we were presented with a 10-page A3 colour-coded spreadsheet in small font. The top two lines were red, suggesting there was a 100% chance of a worker being killed. Unacceptable – but the external risk advisor hurriedly explained that related to the ‘untreated risk’ and, in fact, there were lots of ‘mitigations’ in place.

So why was it red? I pointed out our mortality rate would also increase if we didn’t have stairs between the floors of our high-rise, but thankfully we’d ‘treated’ that risk, too. A closer look at the spreadsheet also showed a jumble of underlying risks, repetition and confused groupings. The estimates of likelihood and impact were “intuitive” (i.e. guesses), and most mitigations were generic jargon.

A big part of the problem is presenting qualitative judgement as if it were empirical, or measuring the wrong things. How many times have you seen a 0-5 scale where the ratings emerge from a discussion at a committee (“I reckon it’s a bit higher than 3.5”)?
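To make the contrast concrete, here is a minimal sketch (the risk names, scores and dollar figures are all invented for illustration, not drawn from any real register) of how two risks a 0-5 heatmap scores identically can carry very different expected losses once likelihood and impact are expressed as real quantities:

```python
# Illustration only: two hypothetical risks with identical heatmap scores
# but very different expected annual losses once quantified.

risks = {
    # name: (heatmap likelihood 0-5, heatmap impact 0-5,
    #        estimated annual probability, loss if it occurs, NZ$)
    "data breach":   (4, 3, 0.40,    500_000),
    "plant failure": (3, 4, 0.05, 20_000_000),
}

for name, (lik, imp, p, loss) in risks.items():
    heatmap_score = lik * imp    # what the committee compares
    expected_loss = p * loss     # what actually matters
    print(f"{name}: heatmap={heatmap_score}, "
          f"expected annual loss=${expected_loss:,.0f}")
```

Both risks land on the same cell of the heatmap, yet one carries five times the expected loss of the other. The ordinal scale hides exactly the information a board needs.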

Hubbard goes to town on this (p17): “. . . risk management has failed . . . for at least one of three reasons . . . (1) the failure to measure and validate methods as a whole or in part (often no experimentally verified evidence); (2) the use of components that are known not to work (they contain serious errors and biases); and (3) the lack of use of components that are known to work (not using available tools)”.

Earlier (p13) he asks what he considers the most important question about risk analysis (and my favourite question): “how do you know it works?” I’ve asked some risk advisors this and the answers are often revealing – if the risk doesn’t manifest then the answer is that the mitigation worked; if the risk does manifest, then the answer is, “I told you so”.

Hubbard suggests other tests: would you call this approach scientific? What would an insurance assessor make of it? Would a bank invest?

Bias

It’s hard to think of an environment better designed to engender human bias. Behavioural economics may have overpromised, but its core insights still apply.

A board sets an enterprise risk appetite and brainstorms the top 10 corporate risks (which are invariably 90% the same as those you would find when searching online). They assign a number out of five after considering likelihood and impact, and identifying mitigations. Workshops with frontline workers identify a list of practical risks. A middle management risk committee reconciles the two, and a final risk management plan emerges for board approval.

Why wouldn’t optimism bias be as rampant in estimates of the likelihood of a risk manifesting and the effectiveness of any mitigation as it is in, say, estimates of sales growth?

Framing bias tells us we will favour a business case with a 90% chance of success over an identical one described as having a 10% chance of failure. Further, prospect theory tells us we are more influenced by the possibility of a loss than by the prospect of an equivalent gain. We are programmed to be risk averse, so do we take that into account?
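The asymmetry can be made concrete. Below is a minimal sketch of Kahneman and Tversky’s prospect-theory value function; the parameters are their published 1992 estimates, and the function is standard behavioural economics rather than anything from this article:

```python
# Prospect-theory value function (Tversky & Kahneman, 1992 estimates).
ALPHA = 0.88   # diminishing sensitivity to the size of gains and losses
LAMBDA = 2.25  # loss aversion: losses weigh ~2.25x equivalent gains

def value(x: float) -> float:
    """Subjective value of a gain (x > 0) or loss (x < 0)."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

# A $1m gain and a $1m loss are not symmetric in felt value:
gain = value(1_000_000)
loss = value(-1_000_000)
print(f"gain feels like {gain:,.0f}, loss feels like {loss:,.0f}")
```

The loss registers roughly 2.25 times as strongly as the equivalent gain, which is why a board weighing “90% success” and “10% failure” framings of the same case can reach different decisions.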

Anchoring bias is one of our more bizarre human foibles: we make numeric assessments based on the last number we heard. That’s why we tend to ask, “Can’t we do this IT project for $50m rather than $60m?”, rather than “Can’t we do this for $5m?” or “Remind me why we need to do this IT project at all?”

Omission/commission bias (it feels safer not to act than to act) nudges us towards avoiding a decision by asking for more information, and so missing the opportunity window.

Solving the problems

Risk conversations can be dull because they are often framed incorrectly – i.e. risks are scary and must be ‘mitigated’. But we know there is no reward without risk. A calculated risk is our friend, not something to be avoided.

One definition of governance is “governance = strategy + risk”. Strategy is about making choices, and each choice changes your risk profile. So, our conversations about strategy should also be conversations about risk. If “operational risks” are important enough, they will derail progress and should be part of those same conversations.

Gavin describes risk as the effect of uncertainty on your objectives. Ben is a fan of management guru Richard Rumelt and the importance of identifying ‘obstacles’ – and how not doing so means your strategy is just a ‘pipedream strategy’.

The idea of ‘risk appetite’ sparks debate. Some see it as the starting point, aiming at a few metrics per risk to get boards and executives to say what they are willing to accept or do. Others argue that it’s where the waffle starts. Maybe the lesson is: ‘Good idea, let’s keep it sharp’.

People are using better analytical tools, such as VUCA (volatility, uncertainty, complexity and ambiguity – a model developed by the US military to cope with the ‘fog of war’) – or the Cynefin framework with its five “decision-making contexts” (clear, complicated, complex, chaotic and confusion), developed by Dave Snowden when at IBM.

Big projects expert Bent Flyvbjerg reckons a major problem with assessing the risks of IT projects is the assumption that cost and schedule overruns follow a bell curve (regression to the mean). The opposite is true: overruns regress to the tail, so we need to look closely at failed projects and expect ‘black swan’ events. The rise of data science, for all its overpromising, offers greater hope for empiricism, and a way to combat our innate biases.
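Flyvbjerg’s point can be illustrated with a small simulation. Everything below is invented for illustration (the distributions and parameters are my assumptions, not Flyvbjerg’s data): under a thin-tailed assumption the worst projects barely matter, while under a heavy-tailed one they dominate the total overrun.

```python
# Illustration of "regression to the tail": compare how much of the total
# cost overrun the worst 1% of projects contribute under a thin-tailed
# (normal) versus heavy-tailed (lognormal) assumption. Parameters invented.
import random

random.seed(42)
N = 100_000

# Thin-tailed assumption: overruns scatter symmetrically around 10%.
normal_overruns = [random.gauss(0.10, 0.15) for _ in range(N)]
# Heavy-tailed assumption: similar typical overrun, long right tail of blowouts.
heavy_overruns = [random.lognormvariate(-2.3, 1.2) for _ in range(N)]

def worst_1pct_share(xs):
    """Share of total overrun contributed by the worst 1% of projects."""
    xs = sorted(x for x in xs if x > 0)
    top = xs[int(len(xs) * 0.99):]
    return sum(top) / sum(xs)

print(f"normal: worst 1% carry {worst_1pct_share(normal_overruns):.0%} of total overrun")
print(f"heavy : worst 1% carry {worst_1pct_share(heavy_overruns):.0%} of total overrun")
```

If the tail is heavy, averaging past projects tells you little; the handful of catastrophic blowouts carries the risk, which is exactly why studying failed projects matters more than studying typical ones.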

Ben questions risk registers – he says reporting should focus on what’s been done to address the big risks, who’s working on them, and whether resilience has been built. Gavin calls for techniques beyond risk workshops, registers and heatmaps, such as stress and scenario testing and ‘pre-mortems’.

Erica recommends hiring experienced critical thinkers into risk roles – people driven by outcomes, not process (otherwise customers or shareholders bear the cost), who call out over-engineered frameworks and reporting, and look for gaps between the board, executive, regulators, other stakeholders and your risk appetite. People, behaviour (risk taking) and culture (speaking up) are most important.

Gavin says risk management isn’t about managing the risks of an organisation, it’s about managing an organisation with risk in mind.

The 3 Body Problem moved into a new dimension when it became a Netflix series. I’m hoping risk management will continue to shapeshift into a more sophisticated dimension of core management and governance.


Kevin Jenkins CMInstD is a professional director and commentator. He chairs REINZ and NZQA, and is on the board of Harrison Grierson, Accessible Properties, WorkSafe and BRANZ.