Frontier Technology | 24 Mar 2022

New open-source software and report to help companies ‘de-risk’ the growing threat of artificial intelligence

With the support of Minderoo Foundation, a new report and tool have been launched to help organisations manage the risks of AI.

Photo Credit: Gradient Institute.

Companies using Artificial Intelligence (AI) will soon be able to access open-source software that helps them better control the impact of the decisions their AI systems make.

Developed by Gradient Institute, with support from Minderoo Foundation’s Frontier Technology initiative, the AI Impact Control Panel elicits the goals and preferences of decision-makers through a graphical user interface and translates them into the mathematical language required by an AI system.

Users of the tool do not need technical knowledge of AI; rather, they set the objectives and constraints for the AI system.

The control panel helps align the AI system's operation with the values of the organisation and of society. It does so by iteratively asking the people accountable for the system to specify the acceptable ranges of different performance measures (relative to known baselines), the relative importance of different objectives, and the relative desirability of different outcomes. The tool adapts the choices it presents over time to efficiently discover users' preferences without overwhelming them.
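The elicitation described above can be pictured as translating two kinds of input into mathematics: acceptable ranges become hard constraints, and relative importance becomes weights in a single score an optimiser can maximise. The sketch below is illustrative only, not Gradient Institute's implementation; the measure names (`approval_rate`, `fairness_gap`), baselines and weights are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Measure:
    """A performance measure with an acceptable range relative to a baseline."""
    name: str
    baseline: float        # known baseline value for this measure
    min_acceptable: float  # lower bound elicited from the decision-maker
    max_acceptable: float  # upper bound elicited from the decision-maker
    weight: float          # relative importance elicited from the decision-maker


def within_constraints(measures: list[Measure], outcome: dict[str, float]) -> bool:
    """Check that every measure of a candidate outcome lies in its acceptable range."""
    return all(
        m.min_acceptable <= outcome[m.name] <= m.max_acceptable
        for m in measures
    )


def scalarise(measures: list[Measure], outcome: dict[str, float]) -> float:
    """Combine elicited weights into one score: a weighted sum of improvements
    over each measure's baseline, with weights normalised to sum to one."""
    total_weight = sum(m.weight for m in measures)
    return sum(
        (m.weight / total_weight) * (outcome[m.name] - m.baseline)
        for m in measures
    )


# Hypothetical measures for a lending model: the decision-maker values
# closing the fairness gap more highly than raising the approval rate.
measures = [
    Measure("approval_rate", baseline=0.60,
            min_acceptable=0.55, max_acceptable=0.75, weight=2.0),
    Measure("fairness_gap", baseline=-0.05,
            min_acceptable=-0.10, max_acceptable=0.0, weight=3.0),
]

candidate = {"approval_rate": 0.65, "fairness_gap": -0.02}
acceptable = within_constraints(measures, candidate)
score = scalarise(measures, candidate)
```

Any candidate outcome that violates a constraint is rejected outright; among acceptable candidates, the scalarised score lets the system rank them against the decision-maker's stated priorities.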

Gradient Institute CEO, Bill Simpson-Young, said along with the AI Impact Control Panel, Gradient Institute, supported by Minderoo Foundation, is today publishing a report with high-level guidance for organisations on how to reduce the risks associated with using AI systems for decision-making purposes.

Mr Simpson-Young said both the control panel and the report, De-Risking Automated Decisions: Practical Guidance for AI Governance, can play an important role in helping organisations understand and manage the serious risks of AI, to which many governance structures have yet to adapt.

“AI views the world only via the data given to it and obeys the letter, not the intent, of instructions. It has no minimum moral constraints, no common sense, no understanding of context and yet can make millions of decisions every second,” he said.

“There are a growing number of cases where organisations, having deployed AI systems for making decisions, caused serious harm with material consequences to people’s lives,” Mr Simpson-Young said.

Emma McDonald, senior policy adviser with Minderoo Foundation’s Frontier Technology initiative, said decision-makers need the ability to quickly adjust parameters to avoid unintended consequences, particularly when AI is making decisions that can affect lives and livelihoods.

“Banks run AI systems to decide who gets a loan, governments use AI systems to police citizens, job agencies use AI to choose who should be shortlisted for a role and social media uses AI systems to filter and highlight the politics or public health messages its users see,” Ms McDonald said.

“Globally we have seen disastrous unintended consequences from AI: recruiting tools showing bias against women, newsfeeds pushing misinformation and hate, healthcare algorithms that wrongly cut off users from pain medication, and the under-resourcing of underprivileged neighbourhoods,” Ms McDonald said.

A key component of Gradient Institute’s report is a collection of real-world case studies illustrating the many ways in which risks can arise when using AI to make decisions.

It also explains why failing to recognise the differences in how people and machines process information can lead to control and monitoring gaps when assigning decision-making to an AI system.

The report suggests several interventions to alleviate the risks, across the three broad fronts of people and culture, routines and processes, and technical practices and tools.

AI expert Dr Catriona Wallace, founder of Ethical AI Advisory, which is now part of Gradient Institute, said organisations must prepare for AI regulation by evolving their risk and governance practices.

“Increasing concern about the capacity to create harm through the use of AI has ignited a range of worldwide policy responses, most noticeably the European Union’s proposal for an ‘AI Act’, and the Australian Human Rights Commission calling for AI regulation in a recent report,” Dr Wallace said.

“This situation creates an imperative for organisations to understand the novel risks posed by using AI systems for decision-making purposes and develop appropriate responses to deal with those risks ahead of upcoming regulations,” she said.

by Minderoo Foundation

Established by Andrew and Nicola Forrest in 2001, we are a modern philanthropic organisation seeking to break down barriers, innovate and drive positive, lasting change. Minderoo Foundation is proudly Australian, with eight key initiatives spanning ocean research, ending slavery, cancer collaboration and community projects.
