A test of intellectual curiosity

Article
By Robert Weaver, FinPro Risk Management, New Zealand
15 Dec 2023
4 min read

Since generative artificial intelligence (GenAI) burst onto the scene with OpenAI’s ChatGPT late in 2022, industry attitudes towards the technology have continued to shift. Internationally, commentators have extolled its user-friendliness with a sense of awe, while also debating the ethics of its use.

It is bigger than any one industry disruptor and potentially marks the dawn of a new era, with continuing advancements in creating, discovering, summarising and automating what we do. The potential applications for AI are vast and extend to almost any industry or sector that relies on decision-making and problem-solving, such as:

  • Healthcare: Diagnosing complex medical conditions, analysing patient data, and recommending personalised treatment plans.
  • Finance: Creating efficiencies across financial analysis, risk assessment and investment strategies, together with improving client interaction.
  • Manufacturing: Optimising production processes, synthesising purchase orders, predicting maintenance needs, and improving overall efficiency.
  • Transportation: Advancing autonomous vehicles, optimising traffic flow, and improving logistics and supply chain management.
  • Environment: Optimising resource usage and assisting private and public decision-makers through enhanced tools and analytics.
  • Education: Adaptive tutoring and the provision of educational content.
  • Research and development: Scientific discoveries and enhanced data analysis.
  • Retail and sales: Transforming shopping experiences and client interaction, and eliminating manual tasks such as cross-referencing.

Whether businesses are at the forefront of developing GenAI or are operating its technology, careful consideration of the ethical, legal and societal implications will be essential. Critical to its successful use will be its implementation within the business community and the role of good governance, for which there is no substitute.

Traditional AI has already raised ethical issues and certain risks surrounding data privacy, security, policies and workforces. GenAI is likely to add business risks in areas such as misinformation, plagiarism, copyright infringement and harmful content.

International litigation in the United States and United Kingdom has typically centred on companies that develop AI, involving allegations of infringement of intellectual property rights, violation of privacy or property rights, or breaches of consumer protection laws.

In the Global Risks Perception Survey that underpins the Global Risks Report 2023, more than four in five respondents anticipated consistent volatility over the next two years.

Considering technological advancements including AI, Marsh has stated: “Sophisticated analysis of larger data sets will enable the misuse of personal information through legitimate legal mechanisms, weakening individual digital sovereignty and the right to privacy, even in well-regulated, democratic regimes.”

Potential scenarios for directors and officers liability include:  

  • Continuous disclosure: Publicly listed companies making statements or representations on the use of AI and its potential benefits are subject to continuous disclosure requirements. Where a degree of uncertainty remains about the technology, its use and its benefits, this may heighten the risk of securities class action litigation from investors who believe they have been misled or not fully informed of the associated risks.
  • Fiduciary duties: For both private and publicly listed companies, there is potential for allegations of inadequate board oversight of the company’s use of AI. Scenarios could involve overreliance and a lack of human oversight, or allegations of overreliance in a business transaction such as a merger or acquisition.
  • Regulatory: While we rely on the existing legal framework, the focus and attention on AI will pose significant regulatory concerns, particularly when it comes to workplace issues (including resource hiring and supervision).
  • Management liability/employment practices liability: As a potential disruptor to a business’s labour force, the use of AI heightens the potential for breaches of employment law. A further scenario could include allegations of bias.

Boards will have to maintain their intellectual curiosity, developing an understanding of generative AI components and the potential risks of any model’s use in their businesses. Risk mapping and the development of a company’s risk posture around GenAI technology will help provide a framework for decision-making.

The approach to integrating AI into a business will continually challenge decisions on capital expenditure and investment, and may pivot on a board’s understanding of the broader risks and opportunities when deciding between a strategic step-by-step process and a complete overhaul and replacement of existing technology. Both during and after integration, consideration will need to extend to the human oversight and input required to minimise errors and misinterpretations in its operational use and interactions.

In New Zealand, the legal risks of GenAI’s emergence will be tested just as they are in other jurisdictions. Given the societal and ethical challenges presented, governments will need to work closely with the courts. There are no AI-specific laws in place; the technology is instead covered under current New Zealand legislation, such as the Privacy Act, the Human Rights Act, the Fair Trading Act, the Patents Act, and the Harmful Digital Communications Act.

However, it is reasonable to expect that legislators and regulators will look abroad to other jurisdictions for insights, especially for fast-evolving or recent trends. On 30 October 2023, the White House released an executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, highlighting the significant regulatory challenges and evolving regulatory landscape in the US.

Fascinatingly, within a few weeks of this announcement, a co-founder of OpenAI was fired and then rehired within days. This illustrates the influence certain individuals can have on their own businesses and the interests of all shareholders, and on political leaders and an international community seeking to understand such technology and to set the laws and regulations of our future society.

Claims overseas are already trending toward privacy, unfair competition, copyright, trademark, libel and facial recognition cases, with plaintiffs mainly focusing on the developers of AI technology. As litigation in this area continues to develop, the insurance industry has a significant opportunity to adapt alongside it. Products specific to AI developers and to users of the technology are already on the market.

In establishing the right balance for the use of AI, stakeholders need to check AI’s outputs as they would their own, and to adapt:

  • For businesses in New Zealand, there is no substitute for good governance.
  • The interconnectivity of current and future risk will present both challenges and solutions.
  • Data protection and intellectual property are current areas of concern.
  • AI-specific law and regulation remain nascent and evolving.
  • Generative AI is technology built by humans and prone to human error.

For more in this space, see the webcast The OpenAI Saga – governance hallucination?