Make a submission: Published response

Raymond Sheh

Published name

Raymond Sheh

How can this list capture emerging uses of AI?

An illustrative list of examples and case studies is needed from a broad base of industry and users. This should include critical infrastructure use cases, such as the use of AI for routing emergency services, drones for first responders, power grid load balancing, and intrusion detection for physical and cyber security.

4. Are there high-risk use cases that government should consider banning in its regulatory response (for example, where there is an unacceptable level of risk)?

Yes

Please provide any additional comments.

Yes, bans should be considered for critical applications where real-time human oversight becomes impossible, particularly where there are insufficient resources to provide it. This would force those deploying such systems to include the cost of genuine human oversight in the decision to develop and deploy them.

5. Are the proposed principles flexible enough to capture new and emerging forms of high-risk AI, such as general-purpose AI (GPAI)?

Yes

Please provide any additional comments.

Yes, but the flexibility also needs to account for changes in societal definitions of AI and related concepts. For example, the meaning of "AI" in society has shifted dramatically over the last decade.

6. Should mandatory guardrails apply to all GPAI models?

No

Please provide any additional comments.

No, additional context is needed for GPAI, as not all "multi-purpose" GPAI models can be used in high-risk applications. The need for guardrails should still depend on the application.

7. What are suitable indicators for defining GPAI models as high-risk?

Based on technical capability

What technical capability should it be based on?

Other

8. Do the proposed mandatory guardrails appropriately mitigate the risks of AI used in high-risk settings?

No

Please provide any additional comments.

No, nothing completely mitigates risk. Rather, expectation management should be emphasised so that stakeholders are clear about the residual risk that they must manage.

9. How can the guardrails incorporate First Nations knowledge and cultural protocols to ensure AI systems are culturally appropriate and preserve Indigenous Cultural and Intellectual Property?

The guardrails are broad enough that, as written, at an abstract level, they could already incorporate First Nations knowledge and cultural protocols. However, appropriate expertise, resourcing and wide consultation among the varied First Nations is necessary to ensure that the implementation of the guardrails is appropriate.

10. Do the proposed mandatory guardrails distribute responsibility across the AI supply chain and throughout the AI lifecycle appropriately?

No

Please provide any additional comments

No, additional guardrails are needed around how AI changes the way that people behave and perform tasks. For instance, it can lead to de-skilling or the displacement of labour, which should be considered and managed.

12. Do you have suggestions for reducing the regulatory burden on small-to-medium sized businesses applying guardrails?

Yes

Please provide any additional comments

Yes. Providing resources and group certification at a government level would allow small end-users who lack the resources themselves to make use of certified models and systems. Grants and funding, including linkage grants for university involvement and the transition of technologies, can be streamlined. An extensive bank of detailed use cases and examples, along with the resources to apply them, would support and provide certainty to small and medium-sized organisations.

13. Which legislative option do you feel will best address the use of AI in high-risk settings?

A whole of economy approach – introducing a new cross-economy AI Act

What opportunities should the Government take into account in considering each approach? 

A whole of economy approach is preferred although what is actually possible is likely to be a hybrid of the three approaches.

15. Which regulatory option(s) will best ensure that guardrails for high-risk AI can adapt and respond to step-changes in technology?

A whole of economy approach – introducing a new cross-economy AI Act

Please provide any additional comments.

A whole of economy approach is preferred although what is actually possible is likely to be a hybrid of the three approaches.