1. Do the proposed principles adequately capture high-risk AI?
Please provide any additional comments.
The principles-based approach to defining high-risk AI set out in the Proposals Paper is not supported for the following reasons:
1. It is not appropriate for organisations to be given responsibility for determining whether their AI system is high-risk (and it is not clear from the Proposals Paper whether the developer, the deployer or both bear this responsibility). The assessment of whether an AI system is high-risk determines whether that system is subject to the mandatory guardrails, which may have significant resource and cost implications. It is therefore the most critical decision that an organisation developing or deploying an AI system will make. Experience from other jurisdictions indicates that when organisations are given too much discretion to make such an assessment, they are likely to err against finding that their AI system requires compliance with AI guardrails (see Lara Groves et al, 'Auditing Work: Exploring the New York City Algorithmic Bias Audit Regime' (Conference Paper, Conference on Fairness, Accountability and Transparency, 3-6 June 2024)). A determination of this importance must be made by the government, as it has democratic accountability.
2. Australia lacks the well-developed legal framework needed to make a principles-based approach meaningful or effective in practice. Federal anti-discrimination laws are not fit-for-purpose and need reform: see the Australian Human Rights Commission's ('AHRC') 'Free and Equal: A Reform Agenda for Federal Discrimination Laws' report (2021) and the recommendations of the Royal Commission into Violence, Abuse, Neglect and Exploitation of People with Disability (2023). Australia does not have modern and comprehensive privacy and data protection laws or national human rights protection.
3. The proposed principles are too vague, and organisations are ill-equipped to apply them in practice. For example, principle (a) requires organisations to have regard to an individual's rights recognised in 'Australian human rights law' and 'Australia's international human rights law obligations'. Principle (d) requires organisations to assess the risk of adverse impacts to the broader economy, society and the environment. Both of these principles require organisations to consider and weigh complex and difficult legal and ethical issues, and organisations are unlikely to have the expertise to do so with any rigour. Further, the principles provide little guidance as to the relevant thresholds. When is a risk an 'adverse' one? When will the 'severity and extent of the adverse impacts' be high enough to warrant a system's classification as high-risk?
4. The principles do not provide for transparency regarding the methodology used by organisations to conduct high-risk assessments and there is no indication in the Proposals Paper that there will be regulatory oversight of this risk assessment process. If a principles-based approach is adopted, at a minimum, there should be mechanisms for an independent regulator to review, and individuals to challenge, an assessment by an organisation that their AI model/system is not high-risk.
A list-based definition is preferred as:
1. It is appropriate for the government, a democratic institution, and not organisations conflicted by profit motives, to decide which AI systems are high-risk. It is best placed to make assessments of the human rights, economic, societal and environmental impacts of particular applications of AI.
2. This approach enables interoperability with regulation in other jurisdictions, including Canada and the European Union ('EU'), and the ability to learn from those jurisdictions.
3. It is transparent.
4. A list provides greater clarity and certainty for organisations and is therefore more easily applied in practice and at less cost.
5. Mechanisms can be adopted to ensure that low-risk AI uses are not inadvertently captured.
3. Do the proposed principles, supported by examples, give enough clarity and certainty on high-risk AI settings and high-risk AI models? Is a more defined approach, with a list of illustrative uses, needed?
If you prefer a list-based approach (similar to the EU and Canada), what use cases should we include? How can this list capture emerging uses of AI?
This should not be a static list. Instead, there should be mechanisms: (i) for an annual review of this list; and (ii) for the list to be amended and updated as necessary.
4. Are there high-risk use cases that government should consider banning in its regulatory response (for example, where there is an unacceptable level of risk)?
If so, how should we define these?
The following high-risk use cases should be banned:
1. AI systems which utilise facial analysis techniques. These techniques rely on machine learning to identify and infer human characteristics, emotions and/or behaviours from an individual's facial features and movements. They lack scientific reliability and validity (see, eg, Lisa Feldman Barrett et al, 'Emotional Expressions Reconsidered: Challenges to Inferring Emotion From Human Facial Movements' (2019) 20(1) Psychological Science in the Public Interest 1) and lead to discrimination. The EU AI Act bans emotion recognition in the area of 'workplace and education institutions' (article 5(1)(f)). However, it is submitted that all AI systems which employ facial analysis should be banned, including those used in immigration and the criminal justice system.
2. Biometric categorisation systems which use protected attributes under anti-discrimination laws.
3. 'Real time' remote biometric identification systems in commercial settings and publicly accessible spaces. Biometric identification has been proven to have low accuracy rates and a high potential to infringe human rights.
4. Predictive policing systems (based on profiling, location or past criminal behaviour), including facial recognition systems, as they have low accuracy rates and a high potential to infringe human rights.
5. Are the proposed principles flexible enough to capture new and emerging forms of high-risk AI, such as general-purpose AI (GPAI)?
Please provide any additional comments.
A list-based approach is preferred.
6. Should mandatory guardrails apply to all GPAI models?
Please provide any additional comments.
These guardrails need to apply to developers of GPAI models and GPAI systems, as well as to deployers.
8. Do the proposed mandatory guardrails appropriately mitigate the risks of AI used in high-risk settings?
Please provide any additional comments.
1. This question raises central issues in a risk-based approach to the regulation of AI: how do we determine if an identified 'risk' is 'appropriately mitigated', and who makes these decisions? There are many gaps and uncertainties in the guardrails. For example, for Guardrail 4, what is the 'objective and measurable performance metric' for discrimination by an AI system? What are the relevant 'test/s' which must be conducted to evaluate whether an AI system has discriminatory impacts? The Proposals Paper indicates that standards will support the implementation of this guardrail. However, determining thresholds for discrimination by AI systems, and how to 'appropriately' mitigate any discriminatory impact, are normative and value-laden decisions, and it is not appropriate for them to be made by standards bodies. These bodies lack democratic legitimacy and the expertise to make decisions about competing rights and interests, and they are usually dominated by industry organisations to the exclusion of civil society and experiential experts. Decisions about these and other relevant issues must be made by government and relevant regulators, with meaningful engagement and input from civil society and experiential experts. There is much work to be done in this area to make the guardrails effective.
2. The proposed mandatory guardrails need to clearly and separately delineate the responsibilities of developers and deployers across the high-risk AI supply chain.
3. A holistic legal framework is required for the mandatory guardrails to be effective in mitigating the risks of AI used in high-risk settings. As stated above in this submission, this requires, at a minimum, anti-discrimination laws which are fit-for-purpose, modern and robust privacy and data protection laws, and national human rights protection.
Are there any guardrails that we should add or remove?
10. Do the proposed mandatory guardrails distribute responsibility across the AI supply chain and throughout the AI lifecycle appropriately?
Which of the guardrails should be amended?
Please provide any additional comments
It is submitted that the existing guardrails in the Proposals Paper should be amended as follows:
Guardrail 2 - this is a key guardrail. It should be strengthened by: (i) making an AI Impact Assessment mandatory before a high-risk AI system is placed on the Australian market and at regular intervals thereafter, with the assessment using human rights as a framework to identify AI harms; and (ii) creating a mandatory obligation to consult with impacted individuals, groups and communities (for example, workers and unions must be consulted before any AI system is deployed in their workplace).
Guardrail 10 - a conformity assessment regime must be overseen by an independent and resourced regulator empowered to: (i) establish a process for the accreditation of third-party assessors (this guardrail will not be meaningful unless conformity assessments are carried out by independent third parties, which is particularly essential for developers of GPAI); (ii) investigate and determine individual complaints of non-compliance with the guardrails; and (iii) conduct 'own motion' investigations and audits regarding compliance by organisations with the guardrails.
As drafted, the current guardrails do not create sufficient transparency obligations for high-risk AI systems on the market in Australia. Transparency is required to give agency and autonomy to individuals as to how and when to interact with an AI system and is a pre-requisite to meaningful contestability of AI decisions and outputs. The guardrails should therefore be amended as follows:
Guardrail 4 - should create a requirement that a 'test' (or audit) of the performance of AI systems across standardised metrics be made publicly available.
Guardrail 6 - this guardrail does not include any right to an explanation. The Government Response to the Privacy Act Review Report (2023) made it clear that a 'right to request meaningful information about how automated decisions with legal or similarly significant effect are made' is supported. This must be reflected in this guardrail.
Guardrail 9 - this guardrail should be strengthened by: (i) requiring that all high-risk AI systems be registered in a public database; and (ii) creating a notifiable adverse incident reporting requirement for all high-risk AI systems (not just GPAI models). This could be similar to the notifiable data breach regime in the Privacy Act 1988 (Cth).
12. Do you have suggestions for reducing the regulatory burden on small-to-medium sized businesses applying guardrails?
Please provide any additional comments
The mandatory guardrails are needed to protect against the potential of high-risk AI systems to cause harm at speed and scale and impact fundamental rights, the economy, the environment and society. There should, therefore, be no 'watering down' of these obligations for SMEs. Instead, SMEs can manage the obligations created by the mandatory guardrails through their contractual arrangements with AI developers. For example, an SME deployer could require an AI developer to fund any conformity assessment required prior to deployment by them. There should also be mechanisms which enable SMEs, where appropriate, to rely on an AI developer's compliance with the guardrails.
13. Which legislative option do you feel will best address the use of AI in high-risk settings?
What opportunities should the Government take into account in considering each approach?
The third option has the benefit of establishing a monitoring and enforcement regime overseen by an independent regulator with expertise in AI systems.
15. Which regulatory option(s) will best ensure that guardrails for high-risk AI can adapt and respond to step-changes in technology?
16. Where do you see the greatest risks of gaps and inconsistencies with Australia’s existing laws for the development and deployment of AI?
1. Anti-discrimination law: the Proposals Paper recognises that discrimination is one of the primary risks of AI systems and that the Australian community has 'consistent concerns' regarding it. However, no evaluation has yet been undertaken by government or the AHRC of whether federal and state anti-discrimination laws adequately protect against AI harms and how legal liability might be apportioned between developers and deployers. Further, no guidance has been provided as to how existing laws apply to AI systems and automated decision-making. These are significant omissions which, as the Australian Discrimination Law Experts Group argued in its submission in response to the 'Safe and Responsible AI in Australia: Discussion Paper', need to be urgently addressed.
2. Any of the three legislative options will be ineffective unless there is reform of the Privacy Act 1988 (Cth) and implementation of all of the reforms set out in the Privacy Act Review Report (2023).
3. Remedies: we must ensure that there exist appropriate remedies across all existing Australian laws. In some cases, the individual remedy of compensation will be appropriate but, in others, systemic and collective remedies will be necessary. When the harm is at a group-level or to society, new remedies must be developed including those requiring the mandatory redesign of a system, temporary bans and mandatory external audits.
Which regulatory option best addresses this?
Please explain why.
The regulatory approach should:
1. Close the gaps and uncertainties in existing laws. For Australian anti-discrimination law, this requires: (i) an evaluation of whether federal and state anti-discrimination laws adequately protect against AI harms and how legal liability might be apportioned between developers and deployers; and (ii) enactment of the reforms set out in the AHRC's 'Free and Equal: A Reform Agenda for Federal Discrimination Laws' report (2021) and the recommendations of the Royal Commission into Violence, Abuse, Neglect and Exploitation of People with Disability regarding disability discrimination laws (2023).
2. Provide guidance on, and enforce, existing laws. For example, we urgently need the AHRC to provide regulatory guidance regarding how anti-discrimination laws apply to AI systems and automated decision-making.
3. Enact new cross-economy AI-specific laws overseen by an independent regulator with the necessary resources, capabilities and capacity to undertake this role.
Make a general comment
A research and evidence-based approach should be taken to all legislative and regulatory reforms.