Make a submission: Published response

Published name

K. Geappen

1. Do the proposed principles adequately capture high-risk AI?

No

Please provide any additional comments.

There is insufficient coverage of the perpetual effects of some AI, especially in relation to deployments that directly or indirectly impact children/youth. Considerations for risks and safety that extend over a lifetime, and can possibly affect subsequent generations, require additional exploration and guidance. These aspects are not usually part of risk considerations and are therefore likely to be overlooked.

Are there any principles we should add or remove?

Yes

Please provide any additional comments.

As above, in relation to the use of AI with children/youth. Unintended consequences may not be apparent for years, even with the best intentions (e.g. the use of AI to create bespoke learning packages may have unintended perpetual consequences if it does not cater for undiagnosed neurodivergence). Guardrails that protect the young and their information must be included, covering both the direct use of AI with children/youth and the use of AI on their information.

Please identify any:

There are no low-risk use cases that are unintentionally captured.
Categories of uses that must be treated separately are those involving minors/youth and those who are differently-abled. This collective group of individuals is vulnerable in three ways across all the domains already listed: two relating to bias and one to consequence/impact. These groups must therefore be provided additional protections from the elevated risks of AI use.
1. Bias from smaller and more varied data sets. Both minors and differently-abled individuals have fewer data points with which to train systems, and even where such data are available, they should be more strongly protected. This means the bias inherent in the training is more likely to have an impact.
2. Bias in the base algorithms of the AI systems. When creating programs that include AI, developers will draw on assumptions or expectations. For both groups of individuals, these are more varied and often fall outside the standard assumptions or expectations. For example, children are not expected to have fully developed executive function for decision making, yet the point at which this milestone occurs varies widely and is still considered normal. A developer creating a program would therefore build in bias by assuming a particular level of function.
3. The consequence/impact of AI performing inappropriately for these individuals can be catastrophic. For example, an AI system used for bespoke delivery of education programs to a student may inadvertently cause 'learned helplessness' by not considering different developmental milestones or even undiagnosed neurodivergence. The 'learned helplessness' may go on to impact the student's confidence, development and achievements and, in the worst case, their mental health and safety.

2. Do you have any suggestions for how the principles could better capture harms to First Nations people, communities and Country?

Yes

Please provide any additional comments.

Similar to the above response for minors and differently-abled people, there must also be additional guidance on considerations for bias (in data sets and in the assumptions made during algorithm creation) and for consequences, which need to be culturally aligned.
The capturing of harms must be done in consultation with First Nations elders and leaders. It must be general enough to cover the many Nations across the country, yet actionable enough to drive further engagement with the appropriate peoples. The principles must include appropriate references to where AI developers and deployers can contact First Nations representatives to further discuss specific AI uses and use cases. This must take into account the vast and varied First Nations custodians across the country, each with their own unique aspects and unique knowledge of their area of Country. Without this consideration, the potential harms from AI are compounded by advice that is not appropriate or relevant to the locality of the AI system. If an AI system is to be used nationally, all First Nations impacted must be considered.

3. Do the proposed principles, supported by examples, give enough clarity and certainty on high-risk AI settings and high-risk AI models? Is a more defined approach, with a list of illustrative uses, needed?

Yes, the principles give enough clarity and certainty

If you prefer a principles-based approach, what should we address in guidance to give the greatest clarity?

A principles-based approach is preferred as it is less prone to being made obsolete by emerging trends and technologies. However, it is vulnerable to being interpreted in a manner that avoids responsibility. This vulnerability must be addressed in such a way that responsibility cannot be avoided or redirected to a scapegoat.

How can this list capture emerging uses of AI?

Categorisation based on impacts/consequences (both positive and negative) to individuals, groups, and Australia could be used to capture emerging uses of AI where the traditional industry domain, or the technology itself, is unknown. The constant is that people and the nation will be impacted; by baselining against these constants, changes will be captured.

4. Are there high-risk use cases that government should consider banning in its regulatory response (for example, where there is an unacceptable level of risk)?

Yes

If so, how should we define these?

High-risk use cases must include those where humans are excluded from decision making, or where the system includes humans in decision making but not at the appropriate juncture of the decision process flow.

5. Are the proposed principles flexible enough to capture new and emerging forms of high-risk AI, such as general-purpose AI (GPAI)?

Yes

Please provide any additional comments.

Though, as above, this can be improved by including greater definition of the consequence/impact on the constants of individuals, groups, and Australia.

6. Should mandatory guardrails apply to all GPAI models?

Yes

Please provide any additional comments.

This will reduce the likelihood of the requirements being interpreted in a way that avoids applying the mandatory guardrails, or of a GPAI model initially being assessed as not requiring guardrails, only for it to be determined much later that its impact is so great that guardrails should have been applied.

7. What are suitable indicators for defining GPAI models as high-risk?

Define high-risk against the principles

8. Do the proposed mandatory guardrails appropriately mitigate the risks of AI used in high-risk settings?

No

Please provide any additional comments.

Recognising the need for flexibility in the framework, there need to be settings or guidance on commonly understood thresholds, nomenclature and context when requiring transparency and accountability. The current technology environment already shows misalignment between developers and deployers/end users in understanding disclosed functionality or risks, whether due to a deliberate lack of transparency or a lack of common nomenclature, expectations or understanding between developer industries and user industries (e.g. small medical clinics deploying technology may not have the background to understand product disclosures and descriptions for cloud technologies).
For the requirements on AI testing, transparency and accountability, there need to be additional settings to aid such understanding and reduce deliberate obfuscation of information. The consequence of misunderstanding or obfuscation in high-risk settings is multiplied with AI, and I therefore consider this essential across the guardrails.

Are there any guardrails that we should add or remove?

Yes

What guardrails should we add or remove?

Other

Please provide any additional comments

The additional guardrail would require documentation of a deliberate analysis of the groups previously identified in this response as facing the greatest perpetual risk from AI, whether directly or as impacted third parties. This means that each use in a high-risk setting, regardless of domain, would have to explicitly justify why these groups are not impacted, or, if they are impacted, how the risks are being managed. This would then need to tie into the guardrails related to informing end users (in this case, the appropriate guardian or safety authority, e.g. a child safety authority). The initial analysis must be reviewed and updated regularly, as the groups impacted by AI are likely to change over time.

It is well known that governance is not always maintained, updated and reviewed for currency. As AI is a fast-moving technology, all the guardrails must be amended to include explicit expectations of regular review, update and endorsement by an appropriate organisational authority. Without this expectation, the accountability aspects will not be as strong as needed.

9. How can the guardrails incorporate First Nations knowledge and cultural protocols to ensure AI systems are culturally appropriate and preserve Indigenous Cultural and Intellectual Property?

As above with underage or vulnerable groups, there is a need to explicitly analyse and document why they are not impacted, directly or indirectly. Where the analysis shows impact, documented consultation with the appropriate First Nations groups impacted must be included. There must also be regular review and update, even if the initial analysis shows they have not been impacted, as this may change over time.

10. Do the proposed mandatory guardrails distribute responsibility across the AI supply chain and throughout the AI lifecycle appropriately?

Yes

Please provide any additional comments

As a matter of principle, developers must hold responsibility for appropriate disclosure of their product in a manner and language that is accessible and materially usable by deployers. This is to include the intended use cases, functionalities, scope boundaries, and any known warnings. Deployers are responsible for implementing, using and applying the technology within the bounds identified by the developers, and for ensuring their deployment use case fits within those described by the developer.

The key aspect to this is ensuring the nomenclature and expectations between developer and deployer are designed such that there are no grounds to claim misunderstanding, which would otherwise lead to questions of accountability and responsibility.

11. Are the proposed mandatory guardrails sufficient to address the risks of GPAI?

No

How could we adapt the guardrails for different GPAI models, for example low-risk and high-risk GPAI models? 

GPAI models change; however, end-user groups and impacts are relatively static. Adapting the guardrails to focus on human impacts, and the assessment of these, would go part of the way to ensuring the guardrails are sufficient to address risks while being more robust to technology changes.

12. Do you have suggestions for reducing the regulatory burden on small-to-medium sized businesses applying guardrails?

Yes

Please provide any additional comments

Production of guidance documentation with a focus on human impacts. This can take the form of a set of questions on a flow-chart or an assessment matrix. The aim is to reduce the knowledge gap for small-to-medium businesses that may not have ready access to technical specialists, and to enable a consistent application of risk assessment across businesses to aid analysis and decision making.

The use of plain-English descriptions of human impacts makes such a tool more accessible to deployers who may not be as technically minded as developers. It ensures developers categorise and mark their product in a way that can be understood by deployers. As above, because the approach is more robust to technology changes, small-to-medium businesses are not constantly required to update their knowledge and understanding as the technology changes.

13. Which legislative option do you feel will best address the use of AI in high-risk settings?

A whole of economy approach – introducing a new cross-economy AI Act

What opportunities should the Government take into account in considering each approach? 

None of the three options is without cons; however, AI risk and safety should be viewed similarly to Privacy risk, and therefore a whole of economy approach is most applicable. A whole of economy approach allows for the establishment of baseline acceptable human risks, especially for underage and vulnerable groups, regardless of existing regulatory frameworks for specific industries/domains. It allows for consistency in how government manages these, in a similar manner to Privacy. A whole of economy approach also reduces the burden on small-to-medium businesses, and aids end users' understanding of personal impacts, as there is a level of consistency regardless of industry.

This does not preclude leveraging existing industry/domain-specific frameworks. On the contrary, this should be an opportunity to highlight and leverage those frameworks specific to risk and information security that are already established. It would also be an opportunity for deliberate analysis of whether controls from one industry framework could be pragmatically leveraged more widely for another, and for uplifting any that are dated and require a refresh.

Existing regulatory bodies should also be incorporated and leveraged.

14. Are there any additional limitations of options outlined in this section which the Australian Government should consider?

Yes

Please provide any additional comments.

With consideration for human-related impacts, there needs to be sensitivity in the discussion regarding different cultural and community groups, where impacts may be viewed differently or where such groups are already vulnerable to discrimination and may therefore require additional protections that do not inadvertently compound that discrimination.

15. Which regulatory option(s) will best ensure that guardrails for high-risk AI can adapt and respond to step-changes in technology?

A whole of economy approach – introducing a new cross-economy AI Act

Please provide any additional comments.

As above, the focus should be on the aspect that is least likely to change with step-changes in technology: the impact on people.

16. Where do you see the greatest risks of gaps and inconsistencies with Australia’s existing laws for the development and deployment of AI?

The greatest risks are where there is contention between the priorities of existing laws and the development of AI. A common example is the balance between privacy and the need for AI to be trained on real data in order to be representative of the population.

Which regulatory option best addresses this?

A whole of economy approach – Introducing a new cross-economy AI-specific Act (for example, an Australian AI Act).

Please explain why.  

Regardless of industry/domain, the human factor (in the example above, Privacy) is again a common feature, and therefore a whole of economy approach to balancing the need for Privacy against the data required for AI would bring consistency.