How aware are you?

AI may have already infiltrated your organisation – the broad ramifications demand a level of digital literacy from all directors.

Article
By Noel Prentice, Editor, Boardroom magazine
18 Dec 2023
3 min read

Technology entrepreneur Angie Judge says AI strikes at the heart of a board’s accountability on both strategic fronts: opportunity and risk.

The CEO of Auckland-based Dexibit, which provides data analytics on visitor behaviour at museums and other cultural institutions, also warns that AI is not a futurist change: it is here, transforming the workplace and, soon, the workforce.

“While many recent technology trends like NFTs [non-fungible tokens] may have proven more hype than reality, this tectonic shift promises to be as significant as the innovations of computing, the internet or social media,” she says.

“What makes this particularly challenging is AI is so broad in its technologies, applications and ramifications for companies through their products and operations. In many organisations, AI will have already infiltrated the company, with boards potentially unaware of where, how and what risks this presents.”

In the race to embrace AI, Judge says the first question for boards is the impact on company strategy.

“For some, AI may prove to be a formidable and almost unavoidable existential threat, just as search was to directories, streaming to cable, and cloud to software. For others, it will pose a precarious balance of how much to invest, and where and when as the world navigates the discoveries and realities of AI in commercial settings.”

Unsurprisingly, technology companies have been the first to fly the ‘now with AI’ banner and add AI-enabled products and features, she says. “What remains unproven for many is whether their users want to use or trust an AI-enabled widget, or whether the technology is fit for purpose.”

For non-technology companies, she says, AI can form part of the customer experience or back-office operations. Customer services, finance, marketing and human resources are prime targets. Forecasting with machine learning, evaluating customer feedback using natural language processing, or automating customer messaging through generative AI are all opportunities to handle routine tasks more efficiently, and potentially even better, than a human hand.

Staff in roles such as marketing might use generative AI to draft content, while engineers might use it to improve code. Judge says this presents two fronts for companies to manage: enough education and encouragement for individuals to get the most from AI, making them more efficient and effective, but enough guard rails to protect the company from the risks of this type of use.

“Staff need to be trained and managed for the risks of AI use, such as those around sensitive data, intellectual property and copyright, plus the tasks for which certain types of AI are inappropriate,” she says. “In uninformed or inexperienced hands, generative AI, in particular, can create a false confidence in the technology’s ability, which is then used without proper checks on its accuracy.


“AI use in the workplace will also likely exacerbate the growing tension for employees who believe their remuneration should be based on outcomes rather than effort. If AI saves an employee an hour a day, the employer’s natural expectation is that the time will be used elsewhere for the company, whereas some individuals may expect that time to be returned to them.”

Judge says it is important for companies to manage additional risks from unintended consequences, such as the potential for discrimination, abuse and fraud. “For example, when machine learning is used to screen employment candidates, it may present as more objective than an inevitably biased human eye. However, if the training data behind the technology is also biased, this can then reinforce discrimination through the technology itself.”

She says boards also face a changing regulatory landscape, especially those operating internationally, as governments attempt to legislate for, or at least introduce voluntary codes of conduct around, AI’s ethical and societal implications, including complex demands for transparency and explainability in a world where most AI operates as a ‘black box’.

Judge says boards must stay apprised of these rapidly evolving technologies, their applications and the market’s reception. While a diverse board with specific expertise is a start, the broad ramifications of AI demand a level of digital literacy from all directors.

“Boards should assess the expertise and experience of their wider leadership team to successfully navigate AI strategies and governance. Together, the company’s management will need to quickly and continuously extend risk management policies, processes, procedures, practices, accountability and training for a world ever changed by AI.”