
IMHO: A matter of trust

Article
By Frith Tweedie, Principal, Simply Privacy
2 May 2023
3 min to read

OPINION: It’s a cliché that trust is hard won and easily lost. With the 2023 Edelman Trust Barometer revealing business is now viewed as the only global institution that is both competent and ethical, the stakes have never been higher for organisations to ensure they maintain that trust in the midst of turbo-charged technological change – particularly when it comes to artificial intelligence (AI).

ChatGPT – the AI chatbot that exploded into public consciousness late last year – has been called AI’s “iPhone moment”. Many believe it signals a critical inflection point for humanity, with Microsoft Vice Chair and President Brad Smith saying: “AI represents the most consequential technology advance of our lifetime . . . Like no technology before it, these AI advances augment humanity’s ability to think, reason, learn and express ourselves. In effect, the industrial revolution is now coming to knowledge work. And knowledge work is fundamental to everything.”

ChatGPT and AI

AI is an umbrella term covering machine learning, computer vision, natural language processing (the field behind ChatGPT) and robotics. A simple definition is “software that learns by example” – that is, from data.
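To make “software that learns by example” concrete, here is a minimal illustrative sketch – assuming Python and the scikit-learn library, neither of which is mentioned in this article, and using entirely hypothetical data. The program is never given explicit rules; it is shown labelled examples and infers a decision rule from them.

    # "Software that learns by example": no rules are written down;
    # the model infers them from labelled examples.
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical toy examples: [transaction amount (NZD), overseas? (0/1)] -> label
    examples = [[50, 0], [80, 0], [5000, 1], [7000, 1]]
    labels = ["ok", "ok", "review", "review"]

    model = DecisionTreeClassifier()
    model.fit(examples, labels)        # "learning by example"
    print(model.predict([[6500, 1]]))  # -> ['review'] on this toy data

The point of the sketch is simply that the behaviour of the system is shaped by the data it is trained on – which is why data quality and governance matter so much later in this article.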

ChatGPT is underpinned by a powerful large language model trained on vast amounts of data. It can generate incredibly realistic, human-like text. Other forms of generative AI, like Stable Diffusion and Midjourney, are able to create sophisticated imagery from basic text instructions.

Microsoft invested US$10 billion in OpenAI, the creator of ChatGPT, and has already integrated the technology into its Bing search engine. It will also be incorporated into Word, PowerPoint, Excel and other business applications. Office work as we know it will never be the same.

What are the implications?

AI tools present massive opportunities from a business perspective – think far greater speed and productivity, and sophisticated automated decision-making, to name just a few. Savvy deployment of AI can improve the way an organisation operates, lift financial performance and enhance shareholder value.

But AI also brings a number of now well-established risks, including privacy and security concerns, bias and discrimination, unreliable algorithms, job displacement, human over-reliance, risks from procuring third-party AI tools, and a lack of transparency and accountability.

ChatGPT compounds those risks in several new and alarming ways, including the potential for widespread fraud and misinformation, intellectual property violations and factual errors (known as “hallucinations”) that can have devastating effects.

These issues create significant enterprise risks, including compliance failure, liability, reputation damage and negative financial performance.

AI regulation is coming

While existing privacy, human rights and discrimination laws all apply to AI, there are growing demands for rules that address AI-specific risks.

The EU’s “AI Act” – expected to come into force next year – sets out obligations for the development and use of AI systems. Like the EU’s General Data Protection Regulation (GDPR), the AI Act will have extra-territorial effect and harsh sanctions – those found in breach face fines of up to 30 million euros or 6% of global annual turnover.

The UK, Canada and the US have all announced plans to regulate AI. China recently announced draft AI regulations to encourage the adoption of safe and trusted AI. The European Parliament has called for international collaboration and political action by world leaders to identify methods of controlling “very powerful” forms of AI.

In New Zealand, the Office of the Privacy Commissioner is currently exploring how best to regulate the use of biometrics, including facial recognition.

In short, the days of unregulated “wild west” AI will soon be over.

Responsible AI

Responsible AI is a governance framework that guides and documents how an organisation can maximise the benefits of AI while minimising potential risks. It involves being transparent about when and how products leverage AI, how algorithms influence business decisions, and the steps being taken to mitigate bias, privacy violations and other risks.

Organisations that navigate these challenges successfully can win the trust of customers and other stakeholders – irrespective of whether a business is subject to AI legislation.

Every organisation is different, so each Responsible AI framework needs to be tailored to meet its specific business – and regulatory – needs. But a typical programme involves the following.

  1. Tone from the top: Boards and senior management need to drive responsible AI engagement and support to embed the right approaches in corporate culture.
  2. An AI strategy: This should set out how the organisation will use AI, why it will do so, and what the benefits and risks might be.
  3. Responsible AI governance: Clearly defined roles, responsibilities and risk tolerances, as well as appropriate risk management structures, policies and processes are critical. Keep an eye out for the AI governance masterclasses that the AI Forum NZ will soon be launching.
  4. Guiding ethical principles: Develop a set of Ethical AI principles tailored to your organisation to guide your approach.
  5. Operationalisation of those principles: Algorithmic Impact Assessments are powerful tools to identify AI risks and controls.
  6. Good data governance and privacy practices: Make sure these critical foundations are in place or reap the consequences.

Responsible AI is a key tool for ensuring AI systems are designed and deployed lawfully, ethically and safely. This, in turn, builds trust, mitigates risk and supports regulatory compliance, positioning your organisation for success in an increasingly AI-driven world.


About the author


Frith Tweedie is a principal at Simply Privacy focused on privacy and AI. She has served on the executive committee of the AI Forum since 2019, was part of the governance group for the New Zealand Algorithm Hub and is currently on the advisory panel for the International Association of Privacy Professionals. She will be part of an IoD panel discussion on ChatGPT on 30 May.

The views expressed in this article do not reflect the position of the IoD unless explicitly stated.

Contribute your perspectives and expertise on an area of governance to the IoD membership and governance community. Contact us at mail@iod.org.nz