Make a submission: Published response
How can this list capture emerging uses of AI?
An illustrative list of examples and case studies, drawn from a broad base of industry and users, is needed. This should include critical infrastructure use cases such as AI for routing emergency services, drones for first responders, power grid load balancing, and intrusion detection for physical and cyber security.
4. Are there high-risk use cases that government should consider banning in its regulatory response (for example, where there is an unacceptable level of risk)?
Please provide any additional comments.
5. Are the proposed principles flexible enough to capture new and emerging forms of high-risk AI, such as general-purpose AI (GPAI)?
Please provide any additional comments.
Yes, but the flexibility also needs to account for changes in societal definitions of AI and related concepts. For example, the meaning of "AI" in society has shifted dramatically over the last decade.
6. Should mandatory guardrails apply to all GPAI models?
Please provide any additional comments.
No, there should be some additional context to GPAI, as not all "multi-purpose" GPAI can be used in high-risk applications. The need for guardrails may still depend on the application.
7. What are suitable indicators for defining GPAI models as high-risk?
What technical capability should it be based on?
8. Do the proposed mandatory guardrails appropriately mitigate the risks of AI used in high-risk settings?
Please provide any additional comments.
No; nothing completely mitigates risk. Rather, expectation management should be emphasised so that stakeholders are clear about the residual risk they must manage.
9. How can the guardrails incorporate First Nations knowledge and cultural protocols to ensure AI systems are culturally appropriate and preserve Indigenous Cultural and Intellectual Property?
As written, the guardrails are broad enough at an abstract level to incorporate First Nations knowledge and cultural protocols. However, appropriate expertise, resourcing and wide consultation among the varied First Nations are necessary to ensure that the guardrails are implemented appropriately.
10. Do the proposed mandatory guardrails distribute responsibility across the AI supply chain and throughout the AI lifecycle appropriately?
Please provide any additional comments
No, additional guardrails are needed around how AI changes the way people behave and perform tasks. For instance, it can lead to de-skilling or displacement of labour, which should be considered and managed.
12. Do you have suggestions for reducing the regulatory burden on small-to-medium sized businesses applying guardrails?
Please provide any additional comments
Yes. Providing resources and group certification at a government level would allow small end-users who lack the resources to make use of certified models and systems. Grants and funding, including linkage grants for university involvement and the transition of technologies, can be streamlined. An extensive bank of detailed use cases and examples, together with the capability and resources to apply them, will support and provide certainty to small-to-medium sized organisations.
13. Which legislative option do you feel will best address the use of AI in high-risk settings?
What opportunities should the Government take into account in considering each approach?
A whole-of-economy approach is preferred, although what is actually achievable is likely to be a hybrid of the three approaches.
15. Which regulatory option(s) will best ensure that guardrails for high-risk AI can adapt and respond to step-changes in technology?
Please provide any additional comments.
A whole-of-economy approach is preferred, although what is actually achievable is likely to be a hybrid of the three approaches.