Directors embrace AI opportunities

Article · By Institute of Directors · 16 Jun 2023 · 4 min to read

Directors have an optimistic view of artificial intelligence (AI) with 72 per cent saying it brings more positives than negatives, and with just 10 per cent concerned it could be detrimental.

An Institute of Directors (IoD) AI Pulse Check survey in June found 58 per cent of respondents already have AI at work in their organisations and a further 7 per cent have plans underway to introduce it.

However, almost the same number, 63 per cent, of respondents acknowledged their boards did not have the skills needed to “lead their organisation into a digital future” with AI.

While the AI Pulse Check only reflects a small sample of IoD members, it provides interesting insights into how directors feel about a technology that is touted to either revolutionise work and society, destroy work and society, or something in between, says Dr Michael Fraser, general manager at the IoD.

Dr Michael Fraser

“What stands out to me is the alignment with our Director Sentiment Survey 2022, which found very little concern at the potential downsides of new technology. In the Sentiment Survey, technological disruption was only perceived as the biggest risk facing organisations by 8 per cent of directors.

“Where AI is perhaps ahead of the pack is that this Pulse Check shows 65 per cent of respondents are already using, or planning to use, AI to some degree. It is seen as a valued business tool.”

Directors acknowledge AI could come with risks, with a majority (60 per cent) of respondents advocating for government to legislate on the use of AI, versus 12 per cent opposing the prospect of AI legislation.

Just 22 per cent of respondents had considered whether their organisations needed a policy to manage AI risks.

“One element behind the fear around AI is the speed at which this technology is evolving. So, directors need to supercharge the pace of their thinking in order to keep up,” Dr Fraser says.

Conversations on AI are critical for boards right now, he says, likening AI to the pinball in a giant pinball machine.

“Where is it going next? If something changes in the landscape, the next step is to figure out what that means for your organisation and assess where things are going.”

Views from the boardroom

Sheridan Broadbent CMInstD and Mitchell Pham

Sheridan Broadbent CMInstD, an independent director, says there are huge opportunities for AI and generative AI in business and society, but adds that it does come with challenges for boards – the first being to get your head around the technology.

“My own view is that this is not something to outsource to experts, as you might with legal advice. You need to have a sense of, roughly, how it works and what the risks and opportunities are,” Broadbent says.

“Those who are good at understanding and managing the upside and downside risk of generative AI will win, and those who don’t will struggle. For me, it represents a net upside opportunity, but you need to be really on your game as a director to navigate your strategy appropriately.”

Independent director Mitchell Pham also feels there are opportunities for businesses to use AI but that, importantly, boards need the governance capability to ensure “safe and responsible” use to maximise the benefits and minimise the potential for harm. 

“Some of the risk discussions should include data privacy and confidentiality, AI models and output biases, false and inaccurate answers, intellectual property (IP) ownership and copyright, cybersecurity and fraud, and customer/consumer protection,” Pham says.

Pham notes three areas that need to be addressed for AI to be a positive force. Firstly, governments should develop AI policies and regulations that are internationally consistent and do not stifle innovation. Secondly, technology developers must self-regulate by building safety into AI technologies, including how they are developed, delivered and maintained. And lastly, businesses must self-govern and provide guardrails for their employees and customers to ensure that AI is used safely and responsibly.

Dr Fraser says there are enough tools within the governance framework for boards to navigate AI effectively.

“There will be vehicles for directors to have those conversations and add to their technology governance capability, including through their succession planning or capabilities matrix as a board,” he says.

The importance of coherent AI policies

Directors need to consider risks and rewards from the implementation of generative AI in their organisations.

Forming a coherent AI policy is crucial, covering employees’ use of generative AI and potential changes to the organisation’s value proposition.

The policy should demonstrate that the business is actively identifying emerging risks and opportunities in AI.

It is important for directors and management to explore AI's impact on operations and strategy to avoid being caught off guard.

A new AI governance committee or workstream may not be necessary, but a straightforward board discussion that results in a written policy is beneficial.

Businesses should set aside time to consider how AI may change the value extracted from their business.

An AI policy should cover data privacy and security, transparency, education and workforce development, ethical and social concerns, and stakeholder considerations.

Developing a “responsible AI” strategy or framework is essential, considering both strategic benefits and risks. Careful consideration of data assets, information processes, and system integration is necessary to maximise AI's benefits.

Organisations should be aware of regulatory progress in foreign markets, such as the European Union's AI Act, which may have global implications.

The EU’s AI Act has moved one step closer to becoming law, with the European Parliament approving it. The rules aim to ensure the human-centric and ethical development of AI. They follow a risk-based approach, prohibiting AI practices that pose an unacceptable level of risk to people’s safety, including bans on intrusive and discriminatory uses of AI systems. High-risk areas have been expanded to include harm to health, safety, fundamental rights and the environment, as well as the influence of AI on political campaigns.

The European regulations on AI could influence global AI standards and practices and may influence New Zealand policymaking.

Businesses dealing with European markets, or considering global expansion, will need to comply with the European regulations to avoid penalties.