
Electronic Frontiers Australia Inc.
ABN: 35 050 159 188
W www.efa.org.au
E email@efa.org.au
@efa_oz

Supporting responsible AI
Department of Industry, Science and Resources: Technology Strategy Branch

13 August 2023

By web form

To the Department,

RE: Safe and Responsible AI

EFA welcomes the opportunity to comment on the Safe and Responsible AI consultation.

EFA’s submission is contained in the following pages.

About EFA

Established in January 1994, EFA is a national, membership-based, not-for-profit organisation that promotes and protects human rights in a digital context.

EFA is independent of government and commerce, and is funded by membership subscriptions and donations from individuals and organisations with an altruistic interest in promoting civil liberties in the digital context.

EFA members and supporters come from all parts of Australia and from diverse backgrounds. Our major objectives are to protect and promote the civil liberties of users of digital communications systems (such as the Internet) and of those affected by their use, and to educate the community at large about the social, political, and civil liberties issues involved in the use of digital communications systems.

Yours sincerely,

Justin Warren
Chair
Electronic Frontiers Australia

Kathryn Gledhill-Tucker
Vice-Chair
Electronic Frontiers Australia
Introduction
As we have consistently asserted in response to previous consultations, EFA considers that the most important aspect of responsible or ethical AI regulation is the introduction of a Federally enforceable human rights framework.1

EFA suggests that there is likely no need for technology-specific legislation. Rather, there already exists a wealth of principles-based regulation of behaviour and harm that needs to be properly enforced. In addition to existing legislation, many of the reforms proposed in the Privacy Act Review would provide a strong foundation for protecting the rights of individuals.

Summary of Recommendations
1. Enforce existing technology-neutral, principles-based legislation rather than rushing to create new, technology-specific legislation.
2. The Privacy Act should be amended to provide strong privacy protections for individuals and groups.
3. Individual and collective rights of action should be adopted as part of a graduated model of regulation that devolves and distributes power more widely.
4. The Federal government should coordinate with the various states and territories to provide a uniform and harmonised regulatory framework.
5. Private organisations that act for the government should be subject to all of the same regulations that bind the government.
6. The government should be required to compensate individuals and groups for redress of harms caused by its failure to implement automated systems safely.
7. Individuals harmed by government systems should be entitled to exemplary damages to incentivise the government to live up to its obligations.
8. Any risk-based framework must include a category of “unacceptable risk” that prohibits certain applications or practices.
9. Responsible AI must be mandated through regulation rather than voluntary principles.

1 Electronic Frontiers Australia, ‘Submission on AI Ethical Framework Consultation’

… profit-seeking should supersede the need to protect people from algorithmically-driven harm or exploitation.

10. Do you have suggestions for:
a. Whether any high-risk AI applications or technologies should be banned completely?
b. Criteria or requirements to identify AI applications or technologies that should be banned, and in which contexts?

The EU AI Act provides sound recommendations for AI applications or technologies that should be placed in a category of “unacceptable risk”. These practices are considered to be such a clear threat to people’s safety, livelihood, and rights that their use should be prohibited. These practices include:

“AI systems that deploy harmful manipulative ‘subliminal techniques’;
AI systems that exploit specific vulnerable groups (physical or mental disability);
AI systems used by public authorities, or on their behalf, for social scoring purposes;
‘Real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes, except in a limited number of cases.”12

We note that, at the time of this consultation, there is a severe lack of legislation protecting individuals from the harms of biometric surveillance. Without adequate regulation, there is a risk of normalising a culture of surveillance, and of passing a point of no return by deploying technology capable of capturing sensitive and immutable details of an individual. In the event of a data breach, an individual cannot change their face.

Australia’s lack of a fundamental Bill of Rights creates challenges for determining if any practices should be banned in Australian society. Unlike the EU, Australia has not yet wrestled with the thorny problem of defining the fundamental principles on which its liberal democracy should be based. We decry certain actions of foreign governments that are viewed as authoritarian or anti-democratic, and yet when those same actions are performed by Australian governments, the conduct is somehow rendered acceptable. The rule of law requires that everyone should be subject to the same standards of behaviour; “it’s okay when we do it” should not be the basis for our regulatory frameworks.

Some conduct should be prohibited because it violates the fundamental principles on which our nation is based. Australia’s challenge is that we are unable to articulate those fundamental principles with any degree of coherence.

12 https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf

Notably, the risk management framework proposed in the Safe and Responsible AI in Australia report has no “unacceptable risk” rating. There are some practices that pose such a serious threat that their use should be prohibited. We support the introduction of prohibited practices such as those categorised as “unacceptable risk” in the EU legislation.

As noted in the report, algorithmic bias is a legitimate concern. AI will replicate the bias of the system on which it is trained. Unwanted bias, such as racial discrimination, cannot be remedied by the introduction of more data if the system (i.e. the underlying dataset) itself is biased. For example, Aboriginal and Torres Strait Islander peoples make up 3% of the general population of Australia, but closer to one third of the imprisoned population.13 Any AI applied to the system of incarceration or law enforcement risks further entrenching this overrepresentation of Aboriginal and Torres Strait Islander peoples in our prison system. We therefore suggest that the existence of extreme bias in a system be taken into account when assessing the risk of AI technologies.

Recommendation: Any risk-based framework must include a category of “unacceptable risk” that prohibits certain applications or practices.

11. What initiatives or government action can increase public trust in AI deployment to encourage more people to use AI?

The framing of this question presupposes that more AI is inherently a good thing. The case has not yet been made. Not all innovation is useful. Not all change is worthwhile. The onus is on those who wish to deploy new technologies to demonstrate their value and safety, and to accept the consequences if they are wrong. The reckless deployment of unproven technologies onto the public at large should be met with scepticism. Breathless techno-utopian claims should not be accepted at face value by any government that claims to value evidence-based decision making.

To assist with determining whether a technology offers real value, and whether that value outweighs any costs to individuals and society collectively, the government could encourage small-scale trials under tightly controlled conditions. This would minimise the risks to Australians while helping to establish a robust evidence base that would justify further support. Such trials should require the publication of detailed findings, both positive and negative. Australians would then be better able to inform themselves of the value of technologies such as AI, and either encourage or discourage further use of public funds to support their development.

13 Prisoners in Australia, 2022 | Australian Bureau of Statistics. (2023, October 5). https://www.abs.gov.au/statistics/people/crime-and-justice/prisoners-australia/latest-release

Direct support is not the only action the government could take. A robust regulatory environment that supports a just and equitable society would encourage the development of technologies that further improve Australians’ quality of life. Active steps to redistribute power and wealth would make Australia a more equal society, less prone to abuses of power by self-interested cliques.

Technological systems are socially constructed. Government choices about which technologies will be used, and which will not, shape the environment in which technologies develop. The choices the government makes should be based on fundamental principles about the kind of society Australia wants to be. All of its choices reflect those principles. When it chooses to favour the interests of multinational corporations and business groups over those of ordinary citizens, it is telegraphing the kind of society it thinks Australia should be.

If the government wants the public to trust it, it must first demonstrate that it is trustworthy. Recent evidence suggests it has a great deal of work to do before that will be true.

Implications and infrastructure
12. How would banning high-risk activities (like social scoring or facial recognition technology in certain circumstances) impact Australia’s tech sector and our trade and exports with other countries?

This question highlights a shortcoming of the risk assessment model proposed in this report. Activities such as social scoring or facial recognition are more severe than “high-risk”; they ought to fall into a category of “unacceptable risk”, as they pose such a threat to individuals that they should be outright banned. No amount of guardrails or supervision can make an activity such as social scoring compatible with a liberal democracy.

This question also highlights the extent to which the government is prepared to take a cold and amoral approach when discussing the rights of its citizens. It is akin to asking “would banning the mass imprisonment of politicians impact Australia’s tech sector and our trade and exports with other countries?” without flinching. The disappointment of those keen to profit from prison expansions would not be seriously balanced against the desire of politicians to remain at large. Why, then, is the government prepared to contemplate fundamental alterations to the nature of our society, such as social scoring, as being somehow related to trade and exports?

Why not investigate the potential for an over-70s Logan’s Run regime to save on the aged pension? Perhaps the local tech sector could be given a boost building miniature GPS trackers to inject into the neck of every public servant? These are obviously ludicrous suggestions, and so is the question posed here.

13. What changes (if any) to Australian conformity infrastructure might be required to support assurance processes to mitigate against potential AI risks?

We have no specific notes on this question.

Risk-based approaches
14. Do you support a risk-based approach for addressing potential AI risks? If not, is there a better approach?

We recognise that the EU Artificial Intelligence Act proposes a risk-based approach to AI legislation that is technology-neutral. However, a risk-based approach is challenging to implement when there is a lack of historical data from past incidents to inform risk assessments. Guesses are not evidence. The sheer novelty of AI technology renders any risk-based approach fundamentally flawed, as there is no foundation, beyond mere speculation, on which to base a risk assessment. “She’ll be right” should not form the basis of government regulation.

Automation of a well-known process with a lengthy history of evidence supporting known-good practices is less likely to go wrong in unexpected ways. Automation of a new process with no history or evidentiary base for safety assessments would be reckless. We suggest that the latter form of automation should fail a “due diligence, expertise, and skill” probity test.

15. What do you see as the main benefits or limitations of a risk-based approach? How can any limitations be overcome?
16. Is a risk-based approach better suited to some sectors, AI applications or organisations than others based on organisation size, AI maturity and resources?
17. What elements should be in a risk-based approach for addressing potential AI risks? Do you support the elements presented in Attachment C?
18. How can an AI risk-based approach be incorporated into existing assessment frameworks (like privacy) or risk management processes to streamline and reduce potential duplication?
19. How might a risk-based approach apply to general purpose AI systems, such as large language models (LLMs) or multimodal foundation models (MFMs)?
20. Should a risk-based approach for responsible AI be a voluntary or self-regulation tool or be mandated through regulation? And should it apply to:
a. public or private organisations or both?
b. developers or deployers or both?

Voluntary self-regulation places too much discretion, and undue faith, in the hands of technology developers and deployers. Frameworks such as the Australian AI Ethics Principles are admirable, but fundamentally ineffective and unenforceable. Organisations in both the public and private sectors cannot be trusted to act in the best interests of individuals, especially in a capitalist system that prioritises the pursuit of profit.

This is not to say that a new suite of legislation is required to effectively moderate the development and deployment of AI. Rather, we ought to reflect on the effectiveness of existing legislation, and ensure regulators (such as the OAIC) are sufficiently funded and empowered to enforce it. If current legislation is not effective at protecting individuals from the real-world harms occurring today, we ought to understand why and remedy those failures as a priority.

Recommendation: Responsible AI must be mandated through regulation rather than voluntary principles.
