Make a submission: Published response

Anchoram Consulting

Published name

Anchoram Consulting

1. Do the proposed principles adequately capture high-risk AI?

No

Please provide any additional comments.

The coverage of the principles does not explicitly include long-term and multi-generational adverse impacts on individuals or groups of individuals. This is especially important when considering impacts to children or vulnerable groups. It is also important that this is explicitly indicated where an adverse impact may reach beyond a single generation.
Principle f refers to ‘severity and extent’, described as including ‘scale and intensity’, but does not include the length of time over which an adverse impact may persist.

Are there any principles we should add or remove?

Yes

Please provide any additional comments.

As above, either amend principle ‘f’ to explicitly mention length of time, or add a principle that clearly separates the breadth of adverse impacts from their duration.

Please identify any:

No low-risk use cases are unintentionally captured, as each use case would have examples where inappropriate AI use could create adverse impacts that fall within the previously listed principles.
Categories of use that may warrant separate treatment should include AI use within the national security and intelligence community. These users may need to use AI where the benefits to national security outweigh the harms to individuals, and where those harms can be controlled through other methods such as the classification of information. Such uses need to be clearly defined, with appropriate oversight to prevent accidental or deliberate misuse.
Categories of users already identified by legislation as requiring additional protections (e.g. minors and those vulnerable to discrimination) should also be considered for separate treatment, to ensure those protections are also afforded when AI is used. This can include additional protections for the use of AI with these groups, and additional responsibilities for disclosure and explanation of AI and its impacts to those responsible for minors or vulnerable people.

2. Do you have any suggestions for how the principles could better capture harms to First Nations people, communities and Country?

No

Please provide any additional comments.

The reference to Australian human rights law, and the listing of cultural groups within principle d, would also capture First Nations people and communities as groups requiring consideration, in line with the protection of First Nations culture already within Australian law.
Consideration for Country would be captured by assessing the use of AI against principles d and e, which cover culture and environment.

3. Do the proposed principles, supported by examples, give enough clarity and certainty on high-risk AI settings and high-risk AI models? Is a more defined approach, with a list of illustrative uses, needed?

Yes, the principles give enough clarity and certainty

If you prefer a principles-based approach, what should we address in guidance to give the greatest clarity?

A principles-based approach provides clarity on the priorities for safe AI use and certainty about its protective intent. However, some aspects must be addressed to ensure it is applied consistently and as intended.
This includes addressing the order of precedence for legislation when there is contention between the AI guardrails and other existing regulations. This may need to consider industry-specific regulations, or at the very least provide a method for assessing and determining priority (e.g. applying the most protective requirement along with justification for the assessed decision, or requesting guidance from the industry regulator).
Guidance will also need to consider what evidence, measures and metrics are used to assess compliance with the guardrails, to ensure consistency and application in line with their intent.

How can this list capture emerging uses of AI?

The principles-based approach is more resilient to emerging uses of AI, as the adverse human impacts will not change based on the technology. For example, embedding discrimination within a decision chain remains harmful regardless of the AI technology used. Although principles are more resilient than a list, they are not immune to missing adverse impacts from emerging uses of AI or new AI technologies. Regular review of the principles’ ability to capture AI use must therefore still occur.

4. Are there high-risk use cases that government should consider banning in its regulatory response (for example, where there is an unacceptable level of risk)?

Yes

If so, how should we define these?

Unacceptable levels of risk should in the first instance align with existing laws where the adverse impact has the potential to fall within existing legal definitions. When undertaking the analysis, the potential for adverse impacts that meet those definitions should be assessed for primary, secondary and unrelated third-party individuals.
There should also be consideration of banning (or at least regulating and monitoring) the undeclared use of AI in high-risk settings, that is, where AI is used in such a way that it is not clearly indicated to the casual observer that AI is embedded within the system. In high-risk settings, decision traceability is essential for a clear understanding of responsibility and accountability.

Please provide any additional comments.

Though these uses are implied through the principles’ inclusion of the rule of law, as they are already defined as illegal this should be explicitly indicated to remove any doubt that they represent an unacceptable level of risk.

5. Are the proposed principles flexible enough to capture new and emerging forms of high-risk AI, such as general-purpose AI (GPAI)?

Yes

Please provide any additional comments.

As above, by using principles that focus on guarding against adverse impacts and harms, changes to technology, including emerging forms of high-risk AI, are less likely to require changes to the principles. However, regular review and validation must still occur to ensure this remains the case.

6. Should mandatory guardrails apply to all GPAI models?

Yes

Please provide any additional comments.

There is no reason why GPAI models should be excluded from the guardrails, as they also have the potential to cause the harms indicated in the principles or to be used in the domains listed for use cases.

7. What are suitable indicators for defining GPAI models as high-risk?

High-risk GPAI models should be defined against the principles.

8. Do the proposed mandatory guardrails appropriately mitigate the risks of AI used in high-risk settings?

No

Please provide any additional comments.

There is insufficient information regarding how the guardrails intend to ensure risks are mitigated for third parties affected by AI who may not have the ability to opt out, as described previously in the paper. Guardrail 6 indicates that end users are informed; however, impacted third parties are not actively using the AI systems or directly engaging with the organisation deploying the system, and therefore guardrail 6 as currently drafted does not apply to them.

Are there any guardrails that we should add or remove?

Yes

What guardrails should we add or remove?

Other

9. How can the guardrails incorporate First Nations knowledge and cultural protocols to ensure AI systems are culturally appropriate and preserve Indigenous Cultural and Intellectual Property?

Explicit inclusion within the risk assessment of impacts to First Nations knowledge and cultural protocols, and of whether the AI system would result in adverse impacts to Indigenous Cultural and Intellectual Property.

10. Do the proposed mandatory guardrails distribute responsibility across the AI supply chain and throughout the AI lifecycle appropriately?

Yes

Please provide any additional comments

Assigning responsibility for knowledge of the AI tool's creation/composition and its intended and possible usage to the developer, and knowledge of the deployment context to the deployer, is pragmatic.

11. Are the proposed mandatory guardrails sufficient to address the risks of GPAI?

Yes

12. Do you have suggestions for reducing the regulatory burden on small-to-medium sized businesses applying guardrails?

Yes

Please provide any additional comments

Clear alignment with existing regulatory frameworks where the adverse impacts are already addressed by them. This would enable re-use of evidence and artefacts across both sets of regulatory requirements.

13. Which legislative option do you feel will best address the use of AI in high-risk settings?

A whole of economy approach – introducing a new cross-economy AI Act

What opportunities should the Government take into account in considering each approach? 

The main consideration is how widely AI is used across the entire economy and that it impacts every individual, group and cultural setting. This is why a whole-of-economy approach must be taken for legislation, in a similar way to privacy legislation, which affects every individual. The Government can take the opportunity to leverage knowledge and experience of the effective (and ineffective) aspects of whole-of-economy legislation such as privacy, or legislation that covers multiple industries such as SOCI, when developing AI legislation.

14. Are there any additional limitations of options outlined in this section which the Australian Government should consider?

Yes

Please provide any additional comments.

There may be limitations in the ability to apply the guardrails as intended given the varied industries and groups involved. This relates directly to the lack of common nomenclature and terminology required to undertake activities in the proposal such as communicating with or informing various stakeholders. Without this, the accuracy of assessments, including whether a deployment of AI is considered high-risk, may be questionable.

15. Which regulatory option(s) will best ensure that guardrails for high-risk AI can adapt and respond to step-changes in technology?

A framework approach – Introducing new framework legislation to adapt existing regulatory frameworks across the economy

Please provide any additional comments.

Introducing framework legislation to adapt existing regulatory frameworks will allow for consistency and leverage established and well-understood regulatory structures. It will also ensure that domains or industries without existing regulatory frameworks capable of adopting the AI guardrails (option 1) are still covered.

16. Where do you see the greatest risks of gaps and inconsistencies with Australia’s existing laws for the development and deployment of AI?

The greatest risk lies in the inability to perform the initial assessment that identifies a high-risk setting. As mentioned, this can result from a lack of defined nomenclature shared across the various responsible groups and industry domains. The risk could also be realised where regulatory priorities are unclear, such that risk analysis is weakened by conflicting requirements from other regulations or by contention between remediation instructions from different regulatory authorities.

Which regulatory option best addresses this?

Other

Please explain why.  

This is not specific to a particular regulatory option but should be considered with whichever option is used. Establishing a common, industry-agnostic language will enable communication of the key information needed for informed risk analysis. It will also allow unambiguous expectations on thresholds for human, cultural and community harms, irrespective of industry or domain nuances.