
Make a submission: Published response

Western Power

Published name

Western Power

1. Do the proposed principles adequately capture high-risk AI?

Yes

Are there any principles we should add or remove?

No

Please identify any:

Low-Risk Use Cases Unintentionally Captured:
Administrative AI Tools: AI applications used for administrative purposes such as scheduling, basic data analysis, and document management often pose minimal risk but are crucial for operational efficiency. These tools do not make autonomous decisions that could lead to significant adverse effects and thus should be categorized distinctly from high-risk applications.

Predictive Maintenance Systems: AI systems designed for predictive maintenance in industrial settings typically analyse equipment condition data to forecast potential failures. While these systems are integral to preventing operational disruptions, they do not control the equipment directly and have layered safety checks, representing a lower risk profile.

Asset Management and Energy Efficiency Tools: Certain AI applications in asset management and energy efficiency may be incorrectly categorized as high-risk. These tools do not pose significant safety or cybersecurity risks but are essential for improving operational efficiency and lowering costs.

Categories of Uses That Should Be Treated Separately:
Smart Meters and Consumer Data Management: AI has extensive applications in smart metering and consumer data management to improve energy efficiency and tailor services to consumer needs. These systems would handle large volumes of consumer data to provide insights into usage patterns and potential savings, and to detect irregularities in energy consumption that could indicate issues such as leaks or faults. While these AI applications involve personal data, the risk they pose relates primarily to data privacy and should be regulated under existing data protection laws rather than the more stringent measures designed for high-risk AI applications.
Renewable Energy Integration: AI plays a pivotal role in integrating renewable energy sources such as wind and solar into the power grid. These systems optimize the energy output from renewable sources and predict fluctuations in generation due to changing weather conditions. Given the importance of renewable energy to sustainable development, AI applications in this area should be encouraged through proportionate regulatory measures that do not impose excessive burdens or unnecessary constraints on their continued development and use.
Grid Optimization and Load Forecasting: AI systems that manage electricity grid optimization and load forecasting could play a crucial role in ensuring energy efficiency and stability. These systems can analyse vast amounts of data to predict energy demand and adjust supply accordingly, which could support our operations team in preventing outages and managing peak loads. While these AI applications are vital for operational efficiency, they pose a lower risk than AI systems that directly control power generation or distribution mechanisms. It would be beneficial for regulations to recognize the low-risk nature of these analytical AI tools and categorize them appropriately, so that stringent controls do not hinder their effectiveness and development.
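To illustrate the advisory nature of these analytical tools, the following minimal sketch (an illustrative example of our own, using hypothetical demand figures and a simple moving-average forecast rather than any production model) shows a load forecaster that only flags expected peaks for operators to review and issues no control actions:

    # Illustrative only: a toy load forecaster that advises operators but takes no
    # control action. Demand values and the peak threshold are hypothetical.
    from statistics import mean

    def forecast_next_hour(recent_demand_mw):
        """Naive moving-average forecast of next-hour demand (MW)."""
        return mean(recent_demand_mw[-4:])  # average of the last four hourly readings

    def advise_operators(recent_demand_mw, peak_threshold_mw=950.0):
        """Return an advisory message; operators decide whether to act."""
        predicted = forecast_next_hour(recent_demand_mw)
        if predicted >= peak_threshold_mw:
            return f"Advisory: forecast {predicted:.0f} MW exceeds the {peak_threshold_mw:.0f} MW peak threshold."
        return f"No action suggested: forecast {predicted:.0f} MW is within the normal range."

    # Hypothetical hourly demand readings (MW)
    print(advise_operators([880, 910, 940, 975]))

The point of the sketch is that the output is information for human operators rather than a control signal to grid equipment, which is why we regard such tools as lower risk.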

2. Do you have any suggestions for how the principles could better capture harms to First Nations people, communities and Country?

Yes

Please provide any additional comments.

To enhance the proposed AI guardrails and better capture harms to First Nations people, communities, and Country, several improvements could be considered:
1. Inclusive Consultation and Engagement: Establish an obligation that requires ongoing consultation whenever an AI tool is implemented that:
A. Poses an elevated risk to Indigenous communities, persons or Indigenous Cultural and Intellectual Property (ICIP);
B. Is used to make decisions relating to Indigenous communities, persons or ICIP; or
C. Will be trained on data including information about Indigenous communities, persons or ICIP.
This would ensure that AI systems are developed with a deep understanding of cultural sensitivities and the specific needs of these communities. Engaging with First Nations Elders and knowledge keepers can provide crucial insights that help in designing AI systems that respect cultural protocols and heritage.
2. Cultural Impact Assessments: Introduce mandatory cultural impact assessments for AI projects that could affect First Nations communities. These assessments should evaluate how AI systems might interact with and impact First Nations cultural practices, lands, and traditions. The assessments should be carried out by culturally competent professionals and should include clear guidelines on mitigating any identified negative impacts.
3. Representation in Regulatory Bodies: Ensure that First Nations representatives are included in the regulatory bodies that oversee the implementation of AI guardrails. Their presence can help in making informed decisions that consider the socio-cultural dimensions of AI applications.
These suggestions aim to create a regulatory environment where AI technologies are developed and deployed in a manner that is not only technologically sound but also culturally informed and respectful of First Nations people’s rights and heritage. Such an approach not only mitigates harms but also fosters trust and collaboration between technological developers and Indigenous communities.

3. Do the proposed principles, supported by examples, give enough clarity and certainty on high-risk AI settings and high-risk AI models? Is a more defined approach, with a list of illustrative uses, needed?

Yes – the principles give enough clarity and certainty

If you prefer a principles-based approach, what should we address in guidance to give the greatest clarity?

1. Clear Definitions: Provide precise and unambiguous definitions for key terms such as "high-risk AI," "AI deployers," and "AI developers." Clear definitions will help stakeholders understand their responsibilities and the scope of the regulations.
2. Contextual Application: Elaborate on how the principles apply in different contexts and industries. AI technologies are deployed across diverse sectors with varying risks and ethical considerations. Detailed guidance on applying these principles across different sectors will help ensure that AI is used responsibly and safely regardless of the application area.
3. Risk Assessment Criteria: Offer detailed criteria for conducting risk assessments of AI systems. This should include factors to consider when determining the potential impact of AI on public safety, privacy, and ethical standards. Providing a methodology for risk assessment will help organizations evaluate the implications of their AI systems systematically (a simple scoring sketch of the kind such guidance could include appears below).
4. Compliance Mechanisms: Outline specific compliance mechanisms that organizations can implement. This includes auditing procedures, documentation requirements, and reporting protocols that align with the principles. Clarity in compliance requirements will facilitate adherence to regulations and promote transparency.
5. Ethical Considerations: Incorporate guidance on ethical considerations, particularly concerning fairness, non-discrimination, and transparency. Offering examples of ethical dilemmas and suggested best practices for resolution could guide organizations in making informed decisions that align with societal values.
6. Feedback and Iteration: Establish a mechanism for ongoing feedback from AI stakeholders to continuously refine and update the guidance based on new developments and insights in AI technology and its applications.
By addressing these areas in the guidance, a principles-based approach can provide the flexibility needed to adapt to rapid technological changes while ensuring that AI systems are developed and used responsibly.
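As an illustration of point 3 above, guidance could set out a simple, repeatable scoring method along the following lines. This is a sketch of our own; the factor names, weights and threshold are assumptions made for illustration, not figures drawn from the proposals paper.

    # Illustrative only: a toy weighted risk-scoring routine. Factors, weights and
    # the high-risk threshold are hypothetical.
    RISK_FACTOR_WEIGHTS = {
        "public_safety_impact": 0.4,
        "privacy_impact": 0.3,
        "ethical_impact": 0.3,
    }

    def risk_score(ratings):
        """Combine assessor ratings (0 = negligible, 5 = severe) into a weighted score."""
        return sum(RISK_FACTOR_WEIGHTS[factor] * ratings[factor] for factor in RISK_FACTOR_WEIGHTS)

    def is_high_risk(ratings, threshold=3.5):
        return risk_score(ratings) >= threshold

    # Example: ratings an assessor might give an internal document-management assistant
    ratings = {"public_safety_impact": 1, "privacy_impact": 2, "ethical_impact": 1}
    print(risk_score(ratings), is_high_risk(ratings))

A worked example of this kind, with the factors and thresholds set by the regulator, would remove much of the ambiguity organizations face when deciding whether a system falls into a high-risk category.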

How can this list capture emerging uses of AI?

We believe that a principles-based approach is the most effective way of capturing emerging uses of AI: it is difficult to compile an exhaustive list of future uses, and regular amendments to such a list would make compliance difficult.

4. Are there high-risk use cases that government should consider banning in its regulatory response (for example, where there is an unacceptable level of risk)?

Yes

If so, how should we define these?

There are certain high-risk AI use cases that government should consider banning or severely restricting in its regulatory response due to the unacceptable level of risk they pose. These include:
1. Autonomous Weapon Systems: AI systems that can select and engage targets without human intervention should be considered for prohibition due to the ethical and humanitarian risks they pose. Defining these systems can be based on their capability to operate without meaningful human control.
2. Mass Surveillance: AI applications that enable widespread surveillance without targeted cause, especially those that process biometric data on a mass scale, can severely infringe on privacy and civil liberties. Such systems should be defined by their scope, scale, and the extent of data processing that violates reasonable expectations of privacy.
3. Social Scoring Systems: AI systems used for social scoring by governments, which can lead to discriminatory outcomes or restrict individuals' access to services based on their behaviour or personal data, should also be considered for prohibition. These should be defined by their use in public policy decisions that can lead to exclusion or discrimination.
4. Predictive Policing Systems: AI systems used in predictive policing rely on historical crime data to predict future crimes and can perpetuate racial biases and disproportionately target minority communities. Such systems should be defined by their use in law enforcement for predictive purposes based on historical data, which may reinforce existing societal biases.
5. AI in Decision-Making for Essential Services: AI systems used for making or assisting decisions in critical areas such as healthcare, finance, or social services, where incorrect or biased decisions can significantly impact individuals' lives, also warrant restriction. These systems should be defined by their autonomous decision-making capabilities in sectors where the consequences of errors or biases are particularly severe.
6. Emotion Recognition Systems: AI that claims to detect emotions based on facial expressions, voice intonation, and other biometrics can invade personal privacy and is often scientifically dubious. Such applications should be defined by their use in sensitive settings such as hiring processes, law enforcement interrogations, and mental health assessments.
In defining these high-risk use cases, it's crucial to focus on their potential for harm, including violations of international human rights standards, impacts on safety and public welfare, and the potential for irreversible damage or societal harm.

5. Are the proposed principles flexible enough to capture new and emerging forms of high-risk AI, such as general-purpose AI (GPAI)?

Yes

Please provide any additional comments.

Further clarity is needed on how multi-use systems like GPAI might evolve in unexpected ways. More detailed guidance on how emerging risks will be monitored and regulated is essential, particularly in the rapidly evolving energy sector, where we may use AI for a range of purposes.

6. Should mandatory guardrails apply to all GPAI models?

Yes

Please provide any additional comments.

Mandatory guardrails should apply to all GPAI models, but the level of scrutiny should be risk-based. For example, GPAI systems used for operational efficiency should not face the same stringent guardrails as those used in critical infrastructure protection or cybersecurity.
Similarly, GPAI models deployed within companies for internal administrative uses (such as repetitive administrative tasks, forecasting and modelling) that are not used to make decisions should not erroneously be classified as high risk simply because they have generative capabilities. Consider, for example, a private instance of Microsoft Copilot: it would be considered GPAI under the proposed definitions, but it would be an overreach to treat it as high-risk or to impose the full slew of obligations on such a system.

7. What are suitable indicators for defining GPAI models as high-risk?

Define high-risk against the principles

8. Do the proposed mandatory guardrails appropriately mitigate the risks of AI used in high-risk settings?

Yes

Please provide any additional comments.

The proposed mandatory guardrails, as they currently stand, do offer a structured approach to mitigating risks associated with the use of AI in high-risk settings. These regulations are designed to ensure that AI systems operate within defined ethical and safety standards, aiming to prevent adverse outcomes such as breaches of privacy, unfair discrimination, and other potential harms.
However, the guardrails may require further refinement to fully mitigate these risks. While the framework establishes a baseline for accountability and transparency, the rapid evolution of AI technologies means that regulatory measures must be continuously updated to keep pace with new developments, including emerging AI capabilities that might not be fully envisioned in the current regulatory proposals.
Furthermore, the application of these guardrails across diverse sectors with varying risk profiles poses a challenge. Ensuring that the guardrails are sufficiently flexible to apply to different contexts while being robust enough to address specific high-risk scenarios is crucial.
To strengthen these guardrails, it would be beneficial to incorporate more dynamic, adaptive regulatory mechanisms that allow for real-time updates based on ongoing risk assessments and technological advancements. Additionally, further engagement with industry and stakeholders will enable further refinement of the guardrails and of the legislation's supporting definitions and interpretation.
Overall, while the proposed guardrails are a significant step toward safer AI deployment in high-risk settings, their capacity to fully mitigate risks depends on their adaptability and the specificity with which they address the complexities of modern AI applications.

Are there any guardrails that we should add or remove?

No

9. How can the guardrails incorporate First Nations knowledge and cultural protocols to ensure AI systems are culturally appropriate and preserve Indigenous Cultural and Intellectual Property?

See suggestions in question (2).

10. Do the proposed mandatory guardrails distribute responsibility across the AI supply chain and throughout the AI lifecycle appropriately?

Yes

Please provide any additional comments

The proposed mandatory guardrails seek to distribute responsibility appropriately across the AI supply chain and throughout the AI lifecycle. However, the current guidelines sometimes blur the lines between the roles of developers and deployers, particularly in scenarios where organizations purchase off-the-shelf AI solutions and then customize or configure these systems by integrating their own datasets. This common practice raises the question: At what point does an organization transition from being merely a deployer to assuming the responsibilities of a developer?
For instance, an organization that customizes an AI system extensively, beyond superficial adjustments, by training it with proprietary data takes on a role that more closely resembles a developer's. This distinction becomes even more critical if the organization subsequently distributes the modified AI system to customers, potentially amplifying the impact and scope of any associated risks. In such cases, the organization might bear a greater share of responsibility, akin to that of a developer, for ensuring that the AI system complies with regulatory standards throughout its lifecycle.
The current guardrails could be enhanced by:
- Providing clearer definitions and criteria that delineate the responsibilities of developers versus deployers, especially when customization or configuration blurs these roles.
- Establishing guidelines that specify how responsibilities shift when an AI system is modified substantially by an end-user and when such systems are redistributed, whether internally within an organization or externally to customers.
- Implementing regulatory mechanisms that assess the extent of modifications made to AI systems and the nature of data integration to determine the reallocation of responsibilities more accurately.
Such clarifications in the guardrails would not only help in clearly assigning accountability but also ensure that all parties involved in the AI lifecycle—from development to deployment and beyond—are aware of their regulatory obligations, thereby safeguarding the integrity and safety of AI applications.

11. Are the proposed mandatory guardrails sufficient to address the risks of GPAI?

No

How could we adapt the guardrails for different GPAI models, for example low-risk and high-risk GPAI models? 

The proposed mandatory guardrails provide a foundational framework for managing risks associated with General-Purpose AI (GPAI), yet they may not be entirely sufficient to address all the risks these technologies pose, especially given the diverse capabilities and applications of GPAI. The current one-size-fits-all approach might be too generic for the varied complexities and impacts of different GPAI models.
To better address the risks and harness the potential of GPAI, the guardrails could be adapted as follows:
1. Risk-Based Classification: Implement a more nuanced classification system that differentiates GPAI models based on their risk levels. This would involve assessing factors such as the intended use, the potential for harm, and the context in which the AI is deployed. High-risk applications, such as those involving significant decision-making powers in critical sectors (e.g., healthcare, criminal justice), would be subject to stricter scrutiny and more robust regulatory requirements. For example, a GPAI deployed only for internal administrative tasks should not be held to the same standards and risk level as a GPAI used by a political party to develop and disseminate campaign media (a simple illustration of such a tiering rule is sketched below).
2. Dynamic Regulatory Framework: Develop a dynamic framework that can evolve with advances in AI technology. This could include provisions for regular updates to the regulations based on insights gathered from ongoing monitoring and evaluation of GPAI applications. Such a framework should allow for flexibility in adapting to new information about risks or advancements in AI safety technologies.
By implementing these adaptations, the guardrails could more effectively mitigate the risks associated with different types of GPAI models, ensuring that they contribute positively to society while minimizing potential harms.
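To make the distinction in point 1 concrete, a tiering rule could look something like the sketch below. This is our own illustrative example; the contexts, tiers and decision rules are hypothetical and are not the tests proposed in the paper.

    # Illustrative only: a toy mapping from deployment context to a guardrail tier.
    # Contexts, tiers and decision rules are hypothetical.
    HIGH_RISK_CONTEXTS = {"healthcare", "criminal_justice", "critical_infrastructure_control"}

    def guardrail_tier(context, makes_autonomous_decisions, affects_the_public):
        """Pick a guardrail tier for a GPAI deployment based on simple contextual factors."""
        if context in HIGH_RISK_CONTEXTS or (makes_autonomous_decisions and affects_the_public):
            return "full mandatory guardrails"
        if affects_the_public:
            return "reduced guardrails plus transparency obligations"
        return "baseline governance only"

    # Example: an internal administrative assistant versus a public-facing campaign tool
    print(guardrail_tier("internal_administration", False, False))
    print(guardrail_tier("political_campaigning", True, True))

Under a rule of this kind, the internal assistant would attract only baseline governance while the public-facing campaign tool would attract the full guardrails, reflecting the distinction drawn in point 1.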

12. Do you have suggestions for reducing the regulatory burden on small-to-medium sized businesses applying guardrails?

Yes

Please provide any additional comments

To reduce the regulatory burden on small-to-medium sized businesses (SMBs) when applying AI guardrails, several strategies can be adopted:
1. Tiered Compliance: Implement a tiered regulatory framework that scales according to the size of the business and the risk level of the AI application. Lower-risk and smaller-scale operations could be subject to less stringent requirements, which would help SMBs avoid the disproportionate costs and resource demands associated with compliance.
2. Streamlined Processes: Develop streamlined processes and clear guidelines for compliance that are specifically designed for SMBs. This could include simplified paperwork, digital submission processes, and clear, jargon-free explanations of regulatory requirements.
3. Technical and Financial Support: Provide government-backed support programs that include technical assistance and potential financial subsidies to help SMBs implement necessary AI safety measures and compliance protocols. This support could be crucial in enabling SMBs to invest in compliance without jeopardizing their operational viability.
4. Flexible Implementation Timelines: Offer more flexible timelines for compliance for SMBs, allowing them extra time to adjust to new regulations. This consideration would acknowledge the limited resources of smaller businesses compared to larger corporations.
5. Regular Feedback Mechanisms: Establish mechanisms for SMBs to provide feedback on the impact of AI regulations on their operations. This continuous feedback can help regulatory bodies adjust guidelines to better fit the needs and capacities of smaller businesses.
Implementing these suggestions would help ensure that AI guardrails are practical and feasible for SMBs, encouraging compliance without stifling innovation or imposing undue financial burdens.

13. Which legislative option do you feel will best address the use of AI in high-risk settings?

A framework approach – Introducing new framework legislation to adapt existing regulatory frameworks across the economy

What opportunities should the Government take into account in considering each approach? 

In considering a framework approach for introducing new legislation to adapt existing regulatory frameworks across the economy for AI in high-risk settings, the government should take into account the following opportunities:
1. Flexibility and Adaptability: The framework approach allows for greater flexibility and adaptability in regulations, which is crucial given the rapid pace of technological advancements in AI. This approach can accommodate emerging technologies without the need for frequent legislative overhauls, enabling regulations to stay current with technological developments.
2. Consistency and Uniformity: A framework approach can provide a consistent and uniform regulatory environment across different sectors of the economy. This uniformity helps to avoid the confusion and compliance challenges that can arise from having disparate regulatory regimes for different industries, thus simplifying the regulatory landscape for businesses and regulators alike.
3. Sector-Specific Adaptations: While maintaining a consistent overarching framework, the government can also tailor specific regulations within that framework to address the unique risks and needs of different sectors. This ensures that sector-specific concerns, such as those in healthcare, finance, or transportation, are adequately addressed, providing a balanced approach that protects public safety without stifling innovation.
4. Stakeholder Engagement: The framework approach offers an opportunity to engage a broad range of stakeholders in the regulatory process, from AI developers and users to consumer groups and ethical watchdogs. This engagement is crucial for ensuring that the regulations are well-informed, practical, and broadly supported across the community.
5. International Alignment: Considering an internationally harmonized approach within the framework can enhance global cooperation on AI regulation. This alignment helps to manage the cross-border challenges of AI technologies and ensures that domestic businesses are not at a competitive disadvantage in international markets.
These opportunities highlight the potential benefits of adopting a framework approach to AI regulation in high-risk settings, suggesting a path forward that is both dynamic and robust, capable of supporting safe and innovative AI development and deployment.

14. Are there any additional limitations of options outlined in this section which the Australian Government should consider?

Yes

Please provide any additional comments.

Domain-Specific Approach:
- Scalability Issues: This approach might not easily scale across different sectors due to the unique challenges and requirements of each domain. Implementing bespoke regulations for each sector could lead to inconsistencies and make broad enforcement challenging.
- Adaptability Limitations: Rapid technological advancements may outpace the domain-specific regulations, requiring frequent updates that could be resource-intensive and slow to enact.
Framework Approach:
- Lack of Specificity: While offering flexibility, a framework approach might lack the detailed specificity needed to address unique risks in certain high-stakes areas, such as healthcare or financial services.
- Overgeneralization Risk: There is a risk of overgeneralizing regulations which might not adequately address the nuanced risks associated with specific AI applications, leading to potential gaps in protections.
Whole of Economy Approach:
- Complexity and Overreach: This approach could result in overly broad regulations that encompass too wide a range of applications, potentially stifling innovation across sectors by imposing unnecessary burdens where they might not be needed.
- Resource Intensive: Implementing and monitoring a whole-of-economy approach would likely require significant resources, which could be challenging to allocate effectively, potentially leading to enforcement gaps.
Each approach has its own merits and challenges, and the optimal regulatory strategy might involve a combination of these approaches, tailored to balance the need for innovation with safety and public interest protections.

15. Which regulatory option(s) will best ensure that guardrails for high-risk AI can adapt and respond to step-changes in technology?

A framework approach – Introducing new framework legislation to adapt existing regulatory frameworks across the economy

Please provide any additional comments.

The framework approach is best suited to ensure that guardrails for high-risk AI can adapt and respond to step-changes in technology. This approach provides the necessary flexibility and adaptability needed to accommodate rapid technological advances without necessitating frequent legislative overhauls.
By establishing a set of overarching principles and guidelines within a regulatory framework, adjustments can be made more dynamically to address emerging risks or exploit new opportunities as AI technologies evolve. The framework approach allows regulators to issue periodic updates or guidance based on the latest developments in AI, ensuring that the regulatory environment keeps pace with technological progress while maintaining robust protections.
Additionally, the framework approach can facilitate a more agile response to specific challenges that arise within different sectors by allowing for tailored adaptations. This ensures that the regulatory measures remain effective and relevant across various applications and industries, promoting both innovation and safety in the use of high-risk AI systems.

16. Where do you see the greatest risks of gaps and inconsistencies with Australia’s existing laws for the development and deployment of AI?

The greatest risks of gaps or inconsistencies with Australia's existing laws for the development and deployment of AI are primarily found in the areas of privacy, data protection, and non-discrimination. Australian laws may not currently fully account for the unique challenges posed by AI, such as algorithmic bias, the extensive collection and use of personal data, and the potential for invasive surveillance.
1. Privacy and Data Protection: Current privacy laws may not adequately address the complexities of data usage in AI systems, particularly in terms of consent, transparency, and the right to explanation, as AI algorithms often process data in ways that are not transparent to users.
2. Algorithmic Bias and Discrimination: Existing non-discrimination laws might not explicitly cover decisions made by AI systems, which can perpetuate or even exacerbate biases if not properly managed. This includes biases in hiring, loan approvals, and law enforcement profiling.
3. Surveillance and Monitoring: The use of AI in surveillance and monitoring can lead to inconsistencies with current laws regarding individual freedoms and rights to privacy, as AI can enable much more pervasive monitoring capabilities than traditional methods.

Which regulatory option best addresses this?

A framework approach – Introducing new framework legislation to adapt existing regulatory frameworks across the economy

Please explain why.  

The framework approach is best suited to address these gaps and inconsistencies. This approach allows for flexibility to adapt and update regulations as technology evolves, ensuring that AI regulations remain relevant and effective in the face of rapid technological changes. By establishing a broad set of principles that can be adapted to specific circumstances, a framework approach can provide clear guidelines for privacy, data protection, and ethical considerations specific to AI. This approach also facilitates ongoing dialogue and adaptation, which is crucial for keeping pace with technological advancements and their implications in various sectors.