Governance news bites – 13 June
A collection of governance-related news you might have missed in the past two weeks.
The power of inclusion in training AI to avoid bias.
With 82% of New Zealand businesses now embracing AI (as highlighted by the AI Forum NZ), the potential for efficiency and innovation is accessible to all sectors.
AI's impact on women is complex. Responsibly embracing ethical AI in business is a good thing, but while AI offers many opportunities, it can also exacerbate existing inequalities, particularly with regard to gender bias.
AI is a powerful tool with the potential to both address and amplify gender bias. AI tools are being implemented to streamline hiring processes, improve healthcare diagnostics and replace traditional contact centres, among many other uses. In these situations, AI could perpetuate existing biases if it is not trained on diverse and inclusive datasets.
For women and gender minorities, this risk can play out in many ways. Healthcare AI that fails to account for gender-specific symptoms, hiring algorithms that reinforce stereotypes, and voice recognition that struggles with diverse accents are just some examples of where AI can deepen existing inequities.
So how can we address this? One way is to review how AI solutions are trained and developed.
AI solutions typically rely on two training methods: unsupervised learning and supervised learning.
Unsupervised learning analyses vast datasets (often publicly available) without human intervention. While this is an efficient large language model (LLM) training method, it also increases the risk of including unchecked biases present in the data.
In contrast, supervised learning includes human oversight, allowing for the identification and correction of biases. I like to call this "keeping a human in the loop". It is essential for aligning AI systems with ethical principles of fairness and equity by ensuring that AI training datasets are curated to reflect diverse perspectives.
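For readers who want a concrete sense of what “keeping a human in the loop” can look like in practice, the sketch below is a hypothetical Python example, not drawn from any specific tool; the attribute name and review threshold are assumptions for illustration. It summarises how a labelled training dataset is spread across a demographic attribute and flags under-represented groups for a human reviewer before training proceeds.

```python
# Illustrative sketch only: a "human in the loop" check that summarises how a
# labelled training dataset is distributed across a demographic attribute.
# The field name ("gender") and the 40% review threshold are hypothetical
# assumptions for this example, not a prescribed standard.
from collections import Counter

def review_dataset_balance(records, attribute="gender", threshold=0.4):
    """Report the share of each group and flag any group whose share falls
    below the threshold, so a human reviewer can decide whether to rebalance
    the data before training proceeds."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    flagged = []
    for group, n in counts.items():
        share = n / total
        print(f"{group}: {n} examples ({share:.0%})")
        if share < threshold:
            flagged.append(group)
    return flagged  # groups a human should review before training continues

# Example usage with a tiny hypothetical hiring dataset
sample = [
    {"gender": "female", "label": "shortlist"},
    {"gender": "male", "label": "shortlist"},
    {"gender": "male", "label": "reject"},
    {"gender": "male", "label": "shortlist"},
]
needs_review = review_dataset_balance(sample)
if needs_review:
    print("Flag for human review:", ", ".join(needs_review))
```

A check like this does not remove bias on its own; its value is that it puts a clear decision point in front of a person with the domain expertise to judge whether the data reflects the people the AI solution will serve.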
Board members can play a pivotal role in fostering ethical AI practices within their organisations by advocating for transparency, championing diverse datasets, and insisting on human oversight during AI solution development.
In my recent IoD article, I shared ways that boards can use existing tools such as the board risk appetite statement to enable innovation with AI. Setting clear ethical expectations in the risk appetite statement about how AI is trained and tested is one way of ensuring that the solutions directors are accountable for do not exacerbate gender bias.
Women account for less than 30% of the technology workforce in New Zealand, yet the underrepresentation of women in technology-related roles should not limit their participation in shaping AI systems. Inclusive AI training does not necessarily require the “human in the loop” to have a technology background. Individuals with domain expertise or lived experience (the very people who understand the nuances of the subject matter) are invaluable in shaping the training and testing of unbiased AI.
AI solutions present a future of immense opportunities, but AI is not immune to the inequities of the present. Women’s voices and experiences, along with those of other underrepresented groups, are critical to ensuring AI systems reflect the diverse needs of society.
Inclusion of diverse perspectives in the teams reviewing and testing AI training data is a key step in reducing adverse impacts. Responsible AI development can be a powerful tool to promote gender equity, and keeping a “human in the loop” when reviewing training data will go a long way towards embedding fairness, diversity and inclusion.
As board members, you are uniquely positioned to drive positive change by championing diverse voices and fostering ethical AI practices within your organisations. By setting clear expectations in your risk appetite statement and embracing the “human in the loop” approach to training and testing AI, drawing on diverse domain expertise and lived experience, you can make it more likely that your organisations’ AI solutions are developed inclusively and equitably.
You can ensure they reflect the rich diversity of our society and serve all your consumers, regardless of their gender or background.
Kate Kolich has chaired Women in Data Science New Zealand (WiDS NZ) for eight years, leading its fundraising to support scholarships for women studying science, technology, engineering and maths (STEM) at university. More than 100 women have spoken at WiDS NZ events, and all events are free to attend (for all genders). Find out more here.
Kate Kolich MInstD has over 25 years of leadership experience in data, digital, and innovation across private and public sectors. She has won multiple industry awards for her work and was named one of the top 100 innovators in data and analytics by Corinium Global Intelligence in 2024. She is the chair of Women in Data Science New Zealand (WiDS NZ).