Eye-opening ‘events’

An AI-driven video analysis platform highlights unseen dangers in the workplace, then magically mitigates the risk.

Article
By Noel Prentice, Editor, Boardroom magazine
18 Dec 2023
4 min read

A CEO tells the board they have shut down a facility because of a staggering number of ‘near misses’, or ‘events’, putting workplace health and safety at serious risk. No one had a clue about the dangers.

Artificial intelligence analyses the data, provides insights and says move the guard rails and bollards here and here, and, hey presto, problem solved. 

That is the magic of artificial intelligence, says Bede Cammock-Elliott, founder of seeo Limited, a technology company offering AI-enabled solutions that improve health and safety outcomes. “It is an incredibly powerful tool for knowing about, then managing, workplace health and safety, and mitigating risks.

“Much of the talk about AI can be hype, but this is one application that’s practical and working today, and it happens to be in an area in which directors are personally and financially liable. So it is a game worth playing. It’s not perfect, but it provides incredible benefits.”

Cammock-Elliott says his AI-driven video analysis platform, named seeo – Latin for ‘to know’ – can analyse hours and hours of CCTV footage and identify near misses and behaviours that put both individuals and organisations at risk. It assesses each event by degree of severity and danger (called salience in ‘seeo talk’). Events can then be triaged into a health and safety management platform.

At an IoD Auckland branch meeting in October, Cammock-Elliott related a case of about 1,800 ‘events’ in a single day of forklifts cutting through a factory safe zone, marked by bollards, where workers are allowed to stand. The drivers were taking the fastest route from A to B. Seeo suggested moving the bollards and pallets a couple of metres, forcing the forklifts to behave in a different way.

The seeo team has taught the AI to recognise objects in a scene, such as people, shipping containers, pallets, freight and material handling equipment. A second layer is what is called ‘rule logic’: what are we looking for, and what does compliance look like? In this scene, compliance means people keeping three metres from moving forklifts; a breach is a person walking through a vehicle-only zone instead of using a walkway.
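Seeo’s actual models and thresholds are not public. As a rough illustration of how a ‘rule logic’ layer might sit on top of object detection, the hypothetical Python sketch below (all names and the three-metre threshold are assumptions) flags any person closer than three metres to a forklift in a single frame.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Detection:
    """One object found by the detection layer, positioned on the floor plan."""
    label: str   # e.g. "person", "forklift"
    x: float     # metres
    y: float     # metres

# Rule logic: people must keep at least this far from moving forklifts.
MIN_SEPARATION_M = 3.0

def find_events(detections):
    """Return (person, forklift) pairs closer than the minimum separation."""
    people = [d for d in detections if d.label == "person"]
    forklifts = [d for d in detections if d.label == "forklift"]
    events = []
    for p in people:
        for f in forklifts:
            if hypot(p.x - f.x, p.y - f.y) < MIN_SEPARATION_M:
                events.append((p, f))
    return events

frame = [
    Detection("person", 1.0, 1.0),
    Detection("forklift", 2.0, 2.0),    # ~1.4 m from the person: an event
    Detection("forklift", 10.0, 10.0),  # well clear
]
print(len(find_events(frame)))  # prints 1
```

In a real pipeline the detections would come from a vision model running over each CCTV frame, and flagged events would be scored for salience before being triaged into the health and safety platform.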

In this case study example, Cammock-Elliott says the company did not know there was a problem until the CEO and general manager of health and safety made sense of the data. “Following some small work design changes, all of those events disappeared.”

He says you can imagine a board and a company believing they are “all good” with people and forklift segregation, and they have a great self-reporting culture. “Then you run some data and boom, there’s hundreds of events where they go, ‘oh my gosh, I had no idea’.”

He says directors should be requesting health and safety reports and asking questions about how many events they have had. Why are they not dropping? Or why did this one drop? Management can then come and speak to that. The challenge is that the data is often incomplete, or horrendously expensive to procure.

What solutions such as seeo can do, he says, is help directors know. It provides insights they have never had access to. If they visit sites and walk around doing critical risk observations, the Hawthorne effect is normally in play: people modify their behaviour because it is being studied.

“One of the challenges for boards and executives when it comes to health and safety is ‘knowing’. How do they know how work is actually done? How do they know about action on the ground? Without AI-driven solutions, they never get to know. They live in a ‘work as imagined’ world, whereas the health and safety risk occurs in a ‘work as done’ world. Seeo bridges the two worlds automatically, much like having a 24/7 auditor on every site.”

AI-based solutions do create ethical challenges. Cammock-Elliott cites another case where a company asked if they could use seeo to identify employees constantly leaving the processing floor to go to the toilet during a shift.

“I said, technically, yes, but ethically we would never do that. You should not outsource that to a technology solution. That is a performance management and leadership issue, not a domain for technology.”

Then there was a customer, he says, who was having a massive number of events at the same time every day. Because of a shift change, everyone was converging on the warehouse: outgoing workers were returning their scanners and RT radios to charge, just as incoming workers were picking up the charged devices. The company introduced three separate charging tables for picking up and returning the devices, instantly lowering its event count. It was a very simple work design change.

Cammock-Elliott mentions another director who confidently talked of forklift and pedestrian segregation being a ‘solved issue’ because of a multimillion-dollar investment in fenced walkways across their sites.

“I visited a site and was guided in through a roller door and walked diagonally across the floor, instead of using the walkway. We have a director who thinks they have solved the problem – that is work as imagined. Conversely, we had an operational manager who did not use the investment – that is work as done.

“There can be a fundamental misalignment between directors and CEOs, who are the ones held accountable and who drive efficiency through their organisations via profit and loss centres, and the line managers below them. The challenge is that the line manager maximises cost efficiency, but does not carry the can for health and safety outcomes.”

On a regulatory level, he says New Zealand as a whole is well behind the eight ball in terms of guard rails around AI. “We have limited guard rails at best. There’s bountiful ways you could use this unethically. Other countries have centres of excellence around AI.”

He also downplayed the hype around language models such as ChatGPT, saying they are not intelligent as we understand intelligence. “It can’t predict the future. It can’t generate new information. That is not to say it won’t impact the economics of creative businesses, because it already has.

“ChatGPT has been helpful in terms of raising the whole AI conversation and getting into people’s consciousness, but the hype versus reality can dampen their level of interest.

“AI is not just a threat. It is a massive opportunity for a country like New Zealand,” he says, pointing to the importance of creating a national goal to reduce the weight of our exports through a maniacal focus on creative, software, data and AI applications, among others.