1. Do the proposed principles adequately capture high-risk AI?
Please provide any additional comments.
It will be important to provide clear examples to guide the application of these principles, including limitations on scope, so that the burden of conducting a risk assessment falls on those with the greatest responsibility for the development / application of the AI model / system and the appropriate expertise to conduct that assessment.
Are there any principles we should add or remove?
Please identify any:
Domains such as defence, national security and policing should not be excluded from the mandatory guardrails. Where the use of AI in these domains is considered high risk, good governance of that use, and a high level of confidence in the quality of the AI system and its data, are at least as important as in other domains. Requirements for transparency in AI use in these domains are important for maintaining public trust in the relevant organisations, and can be managed so as not to compromise the effectiveness of the processes for which an AI system is being used, in the same way that oversight and accountability functions are applied to the use of surveillance devices and similar technologies.
2. Do you have any suggestions for how the principles could better capture harms to First Nations people, communities and Country?
Please provide any additional comments.
While the concept of Indigenous Cultural and Intellectual Property (ICIP) is still in the process of being integrated into the Australian legal framework, there is a significant risk that First Nations people may find existing intellectual property frameworks insufficient to address the misuse of ICIP within AI systems. Specific guidance on these issues should be provided to support risk assessments relating to Principle (d), and such guidance should arise from informed consultation with First Nations people.
3. Do the proposed principles, supported by examples, give enough clarity and certainty on high-risk AI settings and high-risk AI models? Is a more defined approach, with a list of illustrative uses, needed?
If you prefer a principles-based approach, what should we address in guidance to give the greatest clarity?
A structured risk assessment process should be developed that includes examples of risks for each principle and questions to help the risk assessor determine whether the AI system / intended AI use case would fall into a high-risk category. Ideally, such examples and guidance should be produced for each major economic sector (healthcare, education, agriculture, etc.). Specific applications of AI within domains such as healthcare, justice and (non-tertiary) education may benefit from being explicitly identified as high-risk, as this would create more certainty for developers and deployers working in those domains. If developers or deployers in those domains see the burden of risk assessment for any use of AI as too high, the natural response will be to forbid the use of AI altogether, with the result that potential benefits are not realised, or that greater risk emerges from unsanctioned and ungoverned “shadow” use of AI. There are already notable examples of both outcomes within Australia.
4. Are there high-risk use cases that government should consider banning in its regulatory response (for example, where there is an unacceptable level of risk)?
If so, how should we define these?
A list of specific high-risk use cases similar to those in the EU AI Act should be considered. Uses such as social credit scoring, indiscriminate collection of facial images for facial recognition, and identification of individuals in public without pre-approval in specific scenarios should be defined in terms of the effect delivered by the AI system, without specific reference to how that effect is implemented. Identifying applications of AI that are not acceptable in any circumstance reduces the burden of risk assessment and provides more clarity for AI system developers / deployers.
5. Are the proposed principles flexible enough to capture new and emerging forms of high-risk AI, such as general-purpose AI (GPAI)?
Please provide any additional comments.
While GPAI represents a challenge to governance because of the difficulty of predicting all the ways it may be applied, AI technologies may emerge that present an equal or higher risk even though they are much more specialised (narrow AI). The principles-based approach to risk assessment correctly targets the potential impact rather than attempting to define risk in terms of the technology itself, which will no doubt continue to evolve.
6. Should mandatory guardrails apply to all GPAI models?
Please provide any additional comments.
At this time, we believe mandatory guardrails should be applied to all GPAI models. However, a mechanism is needed to keep the burden of compliance from falling primarily on deployers of GPAI in non-high-risk settings.
There is a need to determine whether adequate testing has been done and adequate monitoring is in place where a GPAI model is being used in a context that would otherwise not be considered high-risk. Similarly, there is a need to monitor GPAI usage to ensure that it remains at this (acceptable) risk level.
Consideration should be given to a mechanism by which such usage could be assessed as “pre-approved” or “compliant” by virtue of the vendor of a GPAI model providing evidence that guardrails are in place which ensure the model cannot be used in a high-risk way in a given context. This could take the form of a compliance certificate or equivalent evidence. This product-based approach to risk management would allow deployers and end-users to make use of GPAI solutions safely in low-risk contexts and enable the benefits of those AI use cases to be realised with relatively low overhead for the consumer.
The challenge is that any highly capable GPAI model may still require its deployer to conduct a risk assessment and testing/monitoring to confirm that the usage is, and remains, non-high-risk. This is effectively equivalent to making the full set of guardrails mandatory, and as such may stifle the use of GPAI in settings that would otherwise deliver benefit.
7. What are suitable indicators for defining GPAI models as high-risk?
8. Do the proposed mandatory guardrails appropriately mitigate the risks of AI used in high-risk settings?
Please provide any additional comments.
Guardrail 4 should be more explicit in stating that AI models and systems need to be tested and monitored once deployed to evaluate whether model performance is acceptable for the intended purpose. While this is implied, it would be preferable to make the requirement explicit. Doing so clarifies the expectation that ISO/IEC 29119-11, or any other AI testing process, is applied with reference to well-defined acceptance criteria.
Are there any guardrails that we should add or remove?
9. How can the guardrails incorporate First Nations knowledge and cultural protocols to ensure AI systems are culturally appropriate and preserve Indigenous Cultural and Intellectual Property?
As noted in our response to point 2, consultation with First Nations people is an essential element in making these guardrails effective. This process should include specific initiatives to enable a deep exchange of understanding of the relevant concepts, including technical and legal concepts relating to AI, data sovereignty, ICIP and First Nations knowledge management practices, so that informed decisions can be made on all sides. Provision should be made for First Nations people to have the right to withhold permission for the use of their knowledge and cultural protocols within AI systems, and for processes to be applied to ensure that they are not disadvantaged by withholding such knowledge.
10. Do the proposed mandatory guardrails distribute responsibility across the AI supply chain and throughout the AI lifecycle appropriately?
Please provide any additional comments
The requirements for AI model developers and AI model/system deployers to be transparent are an important part of distributing responsibility across the supply chain. The proposed regulations should make clear what information needs to be provided by each party in the supply chain to support determination of responsibility in the case of a failure of an AI model / system. More detail is needed on what will be required of AI model developers and deployers with respect to any concepts of product certification that may emerge while implementing a practical regulatory process.
11. Are the proposed mandatory guardrails sufficient to address the risks of GPAI?
Please provide any additional comments
In line with our comments on points 6 and 10, more consideration needs to be given to how to avoid the burden of compliance for low-risk uses of GPAI falling on deployers, while still maintaining adequate governance of GPAI solutions. Consider a general "government approved" Retrieval Augmented Generation (RAG) solution developed as an alternative to using ChatGPT, in order to ensure compliance with requirements such as record keeping. Adoption of that general solution will still require additional testing when it is put into use within a specific department where the use is high risk. For example, once a risk assessment of an intended use case has been completed, the tourism department may be able to use the solution "out of the box", but the police department may need to conduct additional testing and monitoring to ensure that its use of the same RAG solution complies with the mandatory guardrails. Part of the risk in that context arises from the documents the RAG system is given access to, so a general "pre-approval" could not apply without additional testing and monitoring being in place.
12. Do you have suggestions for reducing the regulatory burden on small-to-medium sized businesses applying guardrails?
Please provide any additional comments
Yes. Similar to the limits on scope for regulations such as the Privacy Act, SMEs beneath a given revenue threshold could be made exempt from the requirements of the mandatory guardrails, or a reduced set of guardrails could be applied, so as not to unduly burden them with processes that they may not be able to adequately perform. Support should be provided in the form of education and guidelines on responsible use of AI to protect both the SME organisations and their clients / stakeholders from potential harms.
Placing the burden of compliance on commercial vendors to ensure that their AI models / systems are used in a safe context will help reduce risk in a practical way. For example, if an SME is using an AI model / system as part of an integrated Software as a Service solution, the responsibility for compliance may require the SaaS provider to supply monitoring features that detect unsafe usage conditions, such as the AI system being given unintended access to personally identifiable information or other sensitive material. Use of such features could become part of the conditions under which the AI model can be used under a vendor’s “product certification”, without significant compliance effort being borne by the consumer. In that circumstance, it may be sufficient for the AI system deployer to comply with guardrail 10 by confirming that the AI system has been deployed and is being used in conformity with the vendor’s published usage limitations.
13. Which legislative option do you feel will best address the use of AI in high-risk settings?
What opportunities should the Government take into account in considering each approach?
A whole-of-economy approach is key to aligning with other jurisdictions that already have similar AI Acts in place. The opportunity to update and improve current regulatory frameworks across human rights, privacy, copyright, ICIP, etc. should be considered in conjunction with any of the regulatory options being proposed. Putting a consistent cross-economy AI Act in place creates an opportunity to align with other jurisdictions around the world, reducing the risk of Australia becoming the marketplace for poor-quality products that carry an increased risk of harm, creating better opportunities for export, and building a domestic capability in AI assurance that will be required by model developers providing services in the Australian market.
14. Are there any additional limitations of options outlined in this section which the Australian Government should consider?
15. Which regulatory option(s) will best ensure that guardrails for high-risk AI can adapt and respond to step-changes in technology?
Please provide any additional comments.
In the context of creating an AI Act, it would be important to establish an appropriate regulatory body with the expertise and ongoing industry engagement required to enable the application of guardrails to high-risk AI in a meaningful way. In addition, the regulatory framework should make clear what the consequences of non-compliance with the mandatory guardrails will be.
16. Where do you see the greatest risks of gaps and inconsistencies with Australia’s existing laws for the development and deployment of AI?
Inconsistencies in the existing regulatory frameworks that apply to AI usage currently create uncertainty for both AI developers and deployers, with the result that many risk-averse organisations are avoiding the adoption of AI, allowing ungoverned “shadow” usage practices to develop. Putting a whole-of-economy regulatory framework in place, with an appropriate level of support for developing a skilled domestic workforce to enable meaningful compliance with that framework, would be a welcome and positive step forward.
Which regulatory option best addresses this?
Please explain why.
A cross-economy AI-specific Act would give all stakeholders a single point of reference, reducing the scope for confusion or inertia both domestically and internationally. As noted in our comments on point 13, the opportunity should be taken to address inconsistencies in other regulatory frameworks by strengthening the relevant elements supporting human rights, privacy and intellectual property, especially Indigenous Cultural and Intellectual Property. However, if this were the only approach taken, it is likely that the reform would take too long and the regulatory landscape would remain inconsistent and fragmented, hampering Australia's efforts to put appropriate controls in place to manage AI-related risks while also enabling benefits to be realised.