Overview

The potential for artificial intelligence (AI) to improve social and economic well-being is immense. The development and deployment of AI are accelerating, and AI is already permeating institutions, infrastructure, products and services, often without being noticed by those engaging with it.

The Australian Government’s consultations on safe and responsible AI have shown that our current regulatory system is not fit for purpose to respond to the distinct risks that AI poses.

Internationally, governments are introducing new regulations to address the risks of AI, with a focus on creating preventative, risk-based guardrails that apply across the AI supply chain and throughout the AI lifecycle. To unlock innovative uses of AI, we need a modern and effective regulatory system.

In the Australian Government’s interim response to the Safe and Responsible AI in Australia discussion paper, we committed to developing a regulatory environment that builds community trust and promotes AI adoption.

We want your views on:

  • the proposed guardrails

  • how we’re proposing to define high-risk AI

  • regulatory options for mandating the guardrails.

These approaches are set out in our proposals paper, Introducing mandatory guardrails for AI in high-risk settings.

Proposed guardrails

The guardrails in this paper set clear expectations from the Australian Government on how to develop and deploy AI safely and responsibly in high-risk settings in Australia. They aim to:

  • address risks and harms from AI

  • build public trust

  • provide businesses with greater regulatory certainty.

We will use your feedback to inform thinking across government on next steps, including how to best apply the proposed guardrails.

Proposals paper

Proposals Paper for introducing mandatory guardrails for AI in high-risk settings [2.8MB PDF] [1.2MB DOCX]

Timeline

  • Opened: 5 September 2024

  • Closed: 4 October 2024

  • Feedback published (current stage)