
Published response #239
NSW Council for Civil Liberties
26 Jul 2023

NSWCCL SUBMISSION

THE DEPARTMENT OF
INDUSTRY, SCIENCE AND
RESOURCES

SAFE AND RESPONSIBLE AI
IN AUSTRALIA –
DISCUSSION PAPER

26 July 2023
Acknowledgement of Country

In the spirit of reconciliation, the NSW Council for Civil Liberties acknowledges the Traditional
Custodians of Country throughout Australia and their connections to land, sea and community. We pay our respect to their Elders past and present and extend that respect to all First Nations peoples across
Australia. We recognise that sovereignty was never ceded.

About NSW Council for Civil Liberties

NSWCCL is one of Australia’s leading human rights and civil liberties organisations, founded in 1963.
We are a non-political, non-religious and non-sectarian organisation that champions the rights of all to express their views and beliefs without suppression. We also listen to individual complaints and, through volunteer efforts, attempt to help members of the public with civil liberties problems. We prepare submissions to government, conduct court cases challenging infringements of civil liberties, engage regularly in public debates, produce publications, and conduct many other activities.

CCL is a Non-Government Organisation in Special Consultative Status with the Economic and Social
Council of the United Nations, by resolution 2006/221 (21 July 2006).

Contact NSW Council for Civil Liberties
http://www.nswccl.org.au
office@nswccl.org.au
Correspondence to: PO Box A1386, Sydney South, NSW 1235

1. EXECUTIVE SUMMARY

The NSW Council for Civil Liberties (NSWCCL) submits that the proliferation of Artificial Intelligence (AI) poses significant risks to the civil rights of the Australian public. As it stands, Australia’s regulatory system fails to fully address these risks – an issue that will grow with the increased use of these technologies.

The following submission responds to the key questions of concern for the NSWCCL from the
Discussion Paper. In this submission, the NSWCCL recommends that:

1. A statutory office of an AI Safety Commissioner be introduced, to lead regulation and research
into new AI risks and to coordinate the responses of different government bodies and agencies;

2. The existing patchwork of legislation that covers AI regulation be reformed, including improved
privacy protections for citizens;

3. Bespoke AI regulation be introduced that adopts a risk-based approach, with graduated
obligations for AI developers, deployers and users according to risk. This should include:

a. Transparency requirements for all deployers of AI, which become more onerous as the
risk associated with the kind of AI increases;

b. Distinct and more onerous transparency requirements for public sector organisations
that use AI and ADM;

c. Prohibitions on some kinds of AI use in decision-making (differing between the private
and public sectors);

d. Flexibly-defined prohibitions on AI that poses an unacceptable risk of harm; and

e. A regime that delegates specific compliance responsibilities for developers (of upstream
and downstream applications), deployers and users.

Appended to our submission are the NSWCCL’s previous submissions to the Department of the Prime
Minister and Cabinet’s Digital Technology Taskforce1 and the Commonwealth Attorney-General2 on AI regulation and the Privacy Act 1988 (Cth) respectively. These submissions are relevant to the
Department’s current inquiry, and we submit that the Department should also consider the recommendations outlined therein.

1 NSWCCL, Submission to Department of the Prime Minister and Cabinet, Digital Technology Taskforce, ‘Positioning Australia as a Leader in Digital Economy Regulation – Automated Decision Making and AI Regulation – Issues Paper’ (20 May 2022) (NSWCCL Submission to ADM and AI Regulation Issues Paper).
2 NSWCCL, Submission to the Attorney-General’s Department, Privacy Act Review – Discussion Paper (9 January 2022) (NSWCCL Submission to Privacy Act Review).

2. INTRODUCTION

The NSW Council for Civil Liberties (NSWCCL) welcomes the opportunity to make a submission to the
Department of Industry, Science and Resources (the Department) in regard to the Safe and Responsible
AI in Australia – Discussion Paper (the Discussion Paper).

AI poses profound risks to Australians’ human rights. AI systems have the potential to make important decisions that affect our lives in an automated way that is less open, fair and transparent. The growth of AI, both in terms of the technology that underpins it and its ever-increasing private and public sector applications, poses new risks to the right to privacy. For instance, sensitive personal information can be scraped for use in AI training, and AI can be deployed to track people in real time through biometric identification systems or to compile comprehensive consumer profiles. A key risk here is that AI systems can reproduce biases which, when coupled with the opaque nature of their decision-making, can be difficult to identify. Given the lack of transparency in the regulatory requirements for developers and deployers of AI and Automated Decision-Making (ADM) systems, many Australians will not be aware when they have come into contact with such systems.

Perhaps the most recent example of the pitfalls of poorly regulated ADM is the Robodebt Scheme, which highlights that the misuse of even simple kinds of ADM can lead to fatal outcomes for vulnerable
Australians. As the Royal Commission into the Robodebt Scheme found, a simple form of ADM was deployed at large scale by the Department of Social Services to calculate “overpayments” made to welfare recipients.3
Fundamental errors in the ADM led to the miscalculation of welfare entitlements for thousands of vulnerable recipients and to false accusations of overpayments.4 While all Australians are potentially at risk of harm from the misuse of AI and ADM, Robodebt reminds us that poorer and marginalised communities will find it more difficult to assert their rights or seek redress. The NSWCCL urges the Department to keep such citizens front of mind when drafting its recommendations. Though the Robodebt Scheme did not involve
AI, the growth of this technology (which is more complex and opaque than the simple ADM system
Robodebt used) threatens similar harms. Appropriate procedural safeguards and review mechanisms that protect such people should be the hallmarks of a responsible AI regulatory framework.

As noted in the Discussion Paper, AI offers significant opportunities to Australia. But these opportunities come with risks. Absent significant reform, the proliferation of these technologies will lead to abuses of rights to privacy, equality and fairness. The NSWCCL urges urgent reform of the existing patchwork of regulations that covers AI, to fill gaps and address emerging risks. Specifically, we call for bespoke
AI regulation that adopts a risk-based approach, as well as the introduction of an AI Safety Commissioner to oversee regulation in this increasingly important area.

3. FEEDBACK ON THE DISCUSSION PAPER QUESTIONS

1. Do you agree with the definitions in this discussion paper? If not, what definitions do
you prefer and why?

NSWCCL sets out below its comments on the definitions of: (a) Artificial Intelligence; and (b)
Automated Decision Making, as defined in the Discussion Paper.

(a) Artificial Intelligence

AI terms are defined and used differently depending on the context and goals – the goals of defining
AI for scientific usage differ from those of defining AI for the purposes of AI legislation. To the extent that the definitions in this Discussion Paper are intended to flow through to legislation, the NSWCCL submits that these definitions may be too narrow in scope and may not focus enough on the impact of AI technologies.

3 Royal Commission into the Robodebt Scheme (Report, 7 July 2023) Section 2.
4 Ibid, xxiii–xxix.

(i) Scope

The definition of “artificial intelligence” in the Discussion Paper is as follows: “an engineered system that generates predictive outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives or parameters without explicit programming. AI systems are designed to operate with varying levels of automation”.

NSWCCL submits that this definition be amended to: “an engineered system that generates, for a given set of human-defined objectives or parameters, predictive outputs, content, forecasts, recommendations or decisions that influence real and virtual environments. AI systems are designed to operate with varying levels of automation and autonomy.”

The NSWCCL supports the Discussion Paper definition’s focus on predictive outputs, content, forecasts, recommendations and decisions. However, we submit that the words “content, forecasts, recommendations or decisions” should sit alongside “predictive outputs”, rather than being listed as a subset of those outputs. It is not clear even to technical experts how AI systems may develop or react, and “predictive outputs” may be too narrow. For example, large language models have shown “emergent abilities” that were not present in smaller models and thus could not be predicted simply by extrapolating from those smaller models.5

Additionally, the reference to “without explicit programming” may set too high a bar for the level of autonomy expected and could exclude AI systems that involve some explicit programming. The definition should omit this phrase and instead recognise that AI systems can operate with varying levels of autonomy.

(ii) Impact

The Discussion Paper definition of AI Systems does not refer to the impact they can have. The NSWCCL submits that the appropriate scope and focus of AI legislation is on identifiable real-world systems with an impact on real or virtual environments.

We consider that the definition of AI should not extend to all possible AI systems that may be developed, present and future, including in the research domain.6 The NSWCCL submits that legislation that has real-world application and protects against undesirable consequences – such as in safety, health or human rights – while also allowing room for research innovation to develop trustworthy AI offers the most effective and pragmatic approach, and one that will not stifle innovation in a nascent industry. In this respect, the
NSWCCL submits that the Department should have regard to the following international guidance with respect to a definitional focus on the impact that AI systems can have in influencing real or virtual environments:

• The US National AI Initiative Act of 2020 defines the term “artificial intelligence” as “a machine-
based system that can, for a given set of human-defined objectives, make predictions,
recommendations or decisions influencing real or virtual environments. Artificial intelligence
systems use machine and human-based inputs to (A) perceive real and virtual environments; (B)
abstract such perceptions into models through analysis in an automated manner; and (C) use
model inference to formulate options for information or action”.7 This definition refers to
underlying methodologies, but still captures the impact of AI in that it “[influences] real or virtual
environments”.

5 Wei et al., ‘Emergent Abilities of Large Language Models’, Transactions on Machine Learning Research (August 2022), https://openreview.net/pdf?id=yzkSU5zdwD.
6 Mireille Hildebrandt, ‘Global Competition and Convergence of AI Law’ (2022) SocArXiv 10.
85 Ibid.
86 Ibid. See Singapore’s Personal Data Protection Act 2012, which provides that any person who suffers loss or damage directly as a result of a contravention of the Act by an organisation has a right of action for relief in civil proceedings in a court. See also Article 82 of the GDPR, whereby any person who has suffered material or non-material damage (such as emotional distress) as a result of a violation of the GDPR has the right to compensation.
87 Restatement (Second) of Torts (1977) (USA), § 652B and § 652D.

In ABC v Lenah Game Meats,88 Gleeson CJ noted that this standard was a ‘useful practical test’ which involves claimants proving whether the interference is likely to be ‘highly offensive’ to a reasonable person and is not of legitimate concern to the public.89 As such, the NSWCCL suggests adopting the US approach to a harm threshold for breaches of privacy under the Privacy Act.

Journalism carve-out

The NSWCCL acknowledges that journalistic acts and practices may clash with this proposed direct right of action.
Such a clash is already ameliorated by s 7B(4) of the Privacy Act, which exempts the acts and practices of
‘media organisations’, subject to certain requirements.

This exemption for journalistic materials or news activities should be retained for the purposes of a direct right of action, which could be achieved through a possible defence or exception for ‘responsible journalism’. For example, in the US there is a ‘newsworthiness’ exception to the tort of invasion of privacy: if the information or facts in question are a matter of legitimate public concern, this can be raised as a defence.90

26. A STATUTORY TORT OF PRIVACY

Proposal 26.1 (Option 1): Introduce a statutory tort for invasion of privacy as recommended by the ALRC
Report 123.

Proposal 26.2 (Option 2): Introduce a minimalist statutory tort that recognises the existence of the cause of
action but leaves the scope and application of the tort to be developed by the courts.

Proposal 26.3 (Option 3): Do not introduce a statutory tort and allow the common law to develop as required.
However, extend the application of the Act to individuals in a non-business capacity for collection, use or
disclosure of personal information which would be highly offensive to an objective reasonable person.

Proposal 26.4 (Option 4): In light of the development of the equitable duty of confidence in Australia, states
could consider legislating that damages for emotional distress are available in equitable breach of confidence.

The NSWCCL supports the introduction of a statutory tort for invasion of privacy.

NSWCCL considers that a statutory framework is necessary to ensure that the public’s expectation of privacy protection is given form. To date, the common law has failed to give effect to a tort of invasion of privacy, for which there is strong public support.

An extensive and robust statutory tort would address two main mischiefs:

• an intrusion upon seclusion (such as by physically intruding into the plaintiff's private space or by watching,
listening to, or recording the plaintiff's private activities or affairs); and

• a misuse of private information (such as by collecting or disclosing private information about the plaintiff).

A fault element of either intention or recklessness should be included in any tort. In addition, a reasonable person in the plaintiff’s position must have an expectation of privacy. Crucially, the court would need to be satisfied that the public interest in privacy outweighs any countervailing public interests,91 which could include freedom of reporting in the media and freedom of political communication and expression.

88 (2001) 208 CLR 199.
89 Ibid [42] (Gleeson CJ); Australian Law Reform Commission (n 10).
90 Samantha Katze, ‘Hunting the Hunters: AB 381 and California’s Attempt to Restrain the Paparazzi’ (2006) 16 Fordham Intellectual Property, Media and Entertainment Law Journal 1349.
91 Recommendation 19, Digital Platforms Report, p 493, and Recommendations, ALRC Report.

Existing case law has left the door open to the creation of a common law tort of invasion of privacy (ABC v Lenah
Game Meats (2001) 208 CLR 199), yet there has been marked judicial inaction when cases arise in this area (more than
30 years have passed since the English case of Kaye v Robertson [1990] EWCA Civ 21, where the court ruled that privacy had been breached but there was no law under which the breach could be remedied). Such cases include:

• Cases where a misuse of information has led the court ostensibly to affirm a tort of invasion
of privacy. For example, in Jane Doe v Australian Broadcasting Corporation [2007] VCC 281, the court
awarded damages to a victim of sexual assault whose identity was publicly revealed by the ABC. This
case was unique because the tort of invasion of privacy ostensibly operated to supplement the Privacy Act,
not as an extension of it.

• Similarly, in Grosse v Purvis (2003) Aust Torts Reports 81-706, a victim of an egregious breach of privacy
successfully sued a tortfeasor for damages under a tort of invasion of privacy for conduct that could be
described as ‘stalking’.

• While the above two cases appear to suggest the existence of such a tort, these decisions have been
undermined by cases such as Giller v Procopets [2004] VSC 113, which declined to acknowledge the
existence of a tort of invasion of privacy because ‘the law has not developed to the point where the law in
Australia recognises an action for breach of privacy’.

• This refusal of recognition engenders outcomes like those seen in Glencore International AG v Commissioner
of Taxation [2019] HCA 26, where common-sense expectations of privacy in regard to cybersecurity were
only actionable on the basis of a breach of legal professional privilege, as opposed to any kind of invasion
of privacy.

The history of these cases suggests common law development toward such a tort has been incremental and slow.
Introducing a statutory tort would have the effect of declaring the existence of such a tort as well as establishing
its boundaries.

The NSWCCL considers such a tort to be necessary to address the ever-growing role of the internet and social
media platforms, particularly insofar as they relate to the virtual distribution of invasive material via online
platforms. The introduction of a statutory tort will also allow private citizens to better assert their rights even
outside of the courthouse (given the existence of firm legal grounds upon which to advance letters of demand).

The NSWCCL also supports the implementation of an offensiveness threshold, as it would serve to differentiate
similar fact scenarios according to how egregious the conduct was (i.e. how far removed the conduct was from
general standards of privacy). A principles-based approach to such a threshold would allow the general right to
privacy to be considered, rather than an overarching rules-based or strictly formulaic statutory approach, which
could see innocuous or inadvertent ‘invasions’ being actionable under such a tort.

Yours sincerely,

Michelle Falstein
Secretary
NSW Council for Civil Liberties

Contact in relation to this submission: Michelle Falstein
Email: michelle.falstein@nswccl.org.au
Mobile: 0412980540

