
#479
The University of Western Australia Tech & Policy Lab
15 Sep 2023


This submission responds to the Department of Industry, Science and Resources’ Discussion Paper, Safe and Responsible AI in Australia, published in June 2023. The Discussion Paper seeks feedback on possible governance and regulatory responses to ensure “AI is used safely and responsibly”. Drawing on our expertise as interdisciplinary researchers with track records in the development, deployment, and regulation of artificial intelligence (AI) and automated decision-making systems, we have focused on how the Australian Government can proactively manage AI to best ensure that it serves the needs of all Australians.

1. AI’s potential justifies a pro-innovation approach that is not about being rapid or reckless, but about prioritising societally beneficial uses of AI that secure public trust and confidence through trustworthy behaviour. A key element of trustworthy behaviour is evidence-based and contextual assessment of whether, within a suite of potential approaches, AI offers a safe and responsible approach to problem-solving.

2. We recommend the creation of an AI oversight body to ensure the development and deployment of safe and responsible AI in Australia. This governance mechanism would foster trustworthiness, the essential precursor to increasing public trust.

a. The engagement of external experts across both the technical and societal aspects of AI is necessary to ensure independent scrutiny and meaningful civic accountability.

b. Reporting and auditing requirements, together with investigative and enforcement powers, are necessary to provide a robust evidence base and secure safe and responsible conduct.

3. We recommend the Government investigate proactive forms of governance, such as licensing systems, under which it must be demonstrated that AI systems comply with certain standards or have certain measures in place to reduce risks.

4. The uncertainties of AI necessitate a regulatory response that is iterative and responsive to new evidence. A risk-based approach can fulfil this need, but the evidence used to gauge risk must be rigorous, objectively assessed according to known standards, and cognisant that the full risk profile of a given AI system can never be completely understood.

The UWA Tech & Policy Lab is an interdisciplinary research centre focused on civic accountability in the tech ecosystem. Based at UWA Law School under the leadership of Associate Professor Julia Powles and Professor Jacqueline Alderson, the Lab has expertise in technology law and governance, biomechanics and bioengineering, data analytics and machine learning, and augmented/virtual/extended reality technologies. This submission was led by Dr Hannah Smith.
Given the distinctive opportunity presented by Australia’s commitment to safe and responsible AI and the need for a clear and knowable regulatory framework, the essential starting point for the Government is to define what is meant by ‘safe AI’, ‘responsible AI’, and the conjunction ‘safe and responsible AI’. Who has responsibilities, for what, and to whom? What does safety require, for what, by whom, and for whom? Without answers to these questions, safe and responsible AI is a slogan, not a regulatory target.1

It is important in any definitions to retain nuance. That a particular technology may pose a risk to human life does not negate the safety considerations raised by more minor malfunctions. Equally, safe and responsible AI must address the human and environmental considerations associated with the development and deployment of AI technologies, from the labour conditions of workers engaged in data labelling, filtering, and moderating, to the costly ecological impacts of energy and water demands, to the atrophying of different knowledge systems based on the prioritisation of the logic of optimisation. The potential difficulty of definition does not detract from its necessity.
Defining safe and responsible AI provides the Australian Government with an opportunity to lead global discussion on the circumstances of introducing and maintaining AI in society, and how to delineate between what is acceptable and unacceptable.

OVERLOOKED RISKS AND POTENTIAL MITIGATIONS

Despite a plethora of activity, such as that of the Digital Transformation Agency, the AI Ethics Framework, and the National AI Centre, it is unclear how existing Government initiatives promote safe and responsible AI in practice, including within Government itself.

For true system-wide feedback on a coordinated and coherent response to AI, several essential concerns currently missing from the Discussion Paper must be addressed. Chief among these are concerns regarding labour conditions, national security, and intellectual property. We find it difficult to envisage any AI being designated as ‘safe and responsible’ if it neglects the often-appalling working conditions of those vital to the training and monitoring of AI systems,2 or the risks to national security and IP that attend the drive for vast and centralised data repositories.

The Discussion Paper focuses substantially on the risks of bias and how it can arise during AI development and deployment. Bias is a critical concern, and we recognise that the indelibly human inputs to AI mean it will never be devoid of biases rooted in our hopes, fears, uncertainties, and ignorance. Nevertheless, we recommend the Government adopt a more comprehensive understanding of the different types of AI risk in order to better understand their implications and respond appropriately. An excellent illustration is the risk matrix by Maham and Küspert,3 which presents nine relevant risks arising from AI development and deployment, and from subsequent uses

1 See further, J Powles, ‘What Does it Take to Be a Leader in “Responsible AI”’, CSIRO Machine Learning and Artificial Intelligence Future Science Platform Annual Conference (MARS 2023), Brisbane, 6 June 2023.
2 A Williams, M Miceli and T Gebru, ‘The Exploited Labor Behind Artificial Intelligence’ (Noema, 13 October 2022), available at <https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/>.
3 P Maham and S Küspert, Governing General Purpose AI: A Comprehensive Map of Unreliability, Misuse and Systemic Risks (July 2023), available at

