We extended the closing date for this consultation to 6 December 2024. The original closing date was 29 November 2024.
Overview
The Australian Government is considering options to introduce mandatory guardrails for the development and deployment of high-risk AI in Australia. The guardrails will focus on testing, transparency and accountability requirements.
We recently completed a consultation on these proposed mandatory guardrails, seeking broad views from across the economy. In this survey, we want to better understand the regulatory impact the proposed guardrails would have on Australian businesses.
This survey will help us collect data to understand your business and identify the costs and benefits of regulatory change. Aimed at developers and deployers of AI, the survey has 5 sections. It should take 15-30 minutes of your time.
Topics include:
the type of AI activities your business engages in
revenue and expenditure from those AI activities
the proposed regulatory impact on these activities
some broad demographic questions.
You may wish to consult with other areas in the business, like your finance team, to answer these questions.
We have asked industry bodies to send this survey to their members. You have received this survey because your industry organisation has identified you as someone the proposed mandatory guardrails may impact.
Taking part is voluntary. The more companies that respond, the more representative and useful the results will be.
Important background information
The proposed mandatory guardrails are preventative measures that would require developers and deployers of high-risk AI to take specific steps across the AI lifecycle. The approaches outlined in the Proposals Paper would bring Australia more closely into line with the European Union, and with proposed approaches in Canada and the United Kingdom, which join Australia as signatories to the multilateral Bletchley and Seoul Declarations.
For questions about the impact of the guardrails, we will ask you to consider 2 possible scenarios where the guardrails may apply to AI that you develop and deploy:
all high-risk narrow AI
all high-risk narrow AI and all General-Purpose AI.
Definitions for AI models and systems in the Proposals Paper are available on the key definitions page.
Consultation document
Proposals Paper for introducing mandatory guardrails for AI in high-risk settings [2.8MB PDF] [1.2MB DOCX]