#504
Law Council of Australia
15 Sep 2023


Safe and responsible AI in
Australia
Department of Industry, Science and Resources

17 August 2023

Telephone +61 2 6246 3788
Email mail@lawcouncil.au
PO Box 5350, Braddon ACT 2612
Level 1, MODE3, 24 Lonsdale Street,
Braddon ACT 2612
Law Council of Australia Limited ABN 85 005 260 622 www.lawcouncil.au
Table of Contents
About the Law Council of Australia ............................................................................... 3
Acknowledgements ........................................................................................................ 4
Executive Summary ........................................................................................................ 5
Definitions ....................................................................................................................... 6
Governance and regulation............................................................................................ 7
Current state of play ...................................................................................................... 7
Approaching additional regulation ................................................................................. 7
Developing further regulation ........................................................................................ 9
A multifaceted approach ............................................................................................... 11
Product stewardship and transparency......................................................................... 11
Standalone ‘AI Act’ or expansion of current regulation ..................................................12
Establishment of a dedicated AI taskforce ....................................................................13
Industry-specific regulation ...........................................................................................13
Monitoring and review ..................................................................................................15
Regulation of high-risk technology and application ...................................................16
Biometrics and the use of automated facial recognition technology ..............................17
Social scoring ...............................................................................................................18
Fakes and scams .........................................................................................................19
International coherence and consistency ....................................................................21
Australia’s place in the global economy ........................................................................23
Public sector uses of AI ................................................................................................23
Concerns around ‘automated decision making’ ..........................................................24
The importance of data .................................................................................................28
Human rights and AI ......................................................................................................29
Algorithmic bias ............................................................................................................29
Competition and consumer issues...............................................................................31
Consumer law issues ...................................................................................................31
Competition law issues.................................................................................................35
Other areas for consideration .......................................................................................36
Increasing public trust in the use of AI ..........................................................................36
Ethical responsibilities and AI software testing..............................................37
Privacy .........................................................................................................................37
Supporting compliance by small to medium enterprises ...............................................38
Infrastructure and AI .....................................................................................................38
Consideration of intellectual property ...........................................................................39
Justice system and the legal sector ..............................................................................39

Safe and responsible AI in Australia Page 2
About the Law Council of Australia
The Law Council of Australia represents the legal profession at the national level, speaks on behalf of its
Constituent Bodies on federal, national and international issues, and promotes the administration of justice, access to justice and general improvement of the law.

The Law Council advises governments, courts and federal agencies on ways in which the law and the justice system can be improved for the benefit of the community. The Law Council also represents the
Australian legal profession overseas, and maintains close relationships with legal professional bodies throughout the world. The Law Council was established in 1933, and represents its Constituent Bodies:
16 Australian State and Territory law societies and bar associations, and Law Firms Australia. The Law
Council’s Constituent Bodies are:

• Australian Capital Territory Bar Association
• Law Society of the Australian Capital Territory
• New South Wales Bar Association
• Law Society of New South Wales
• Northern Territory Bar Association
• Law Society Northern Territory
• Bar Association of Queensland
• Queensland Law Society
• South Australian Bar Association
• Law Society of South Australia
• Tasmanian Bar
• Law Society of Tasmania
• The Victorian Bar Incorporated
• Law Institute of Victoria
• Western Australian Bar Association
• Law Society of Western Australia
• Law Firms Australia

Through this representation, the Law Council acts on behalf of more than 90,000 Australian lawyers.

The Law Council is governed by a Board of 23 Directors: one from each of the Constituent Bodies, and six elected Executive members. The Directors meet quarterly to set objectives, policy, and priorities for the Law Council. Between Directors’ meetings, responsibility for the policies and governance of the
Law Council is exercised by the Executive members, led by the President who normally serves a one-year term. The Board of Directors elects the Executive members.

The members of the Law Council Executive for 2023 are:

• Mr Luke Murphy, President
• Mr Greg McIntyre SC, President-elect
• Ms Juliana Warner, Treasurer
• Ms Elizabeth Carroll, Executive Member
• Ms Elizabeth Shearer, Executive Member
• Ms Tania Wolff, Executive Member

The Chief Executive Officer of the Law Council is Dr James Popple. The Secretariat serves the Law
Council nationally and is based in Canberra.

The Law Council’s website is www.lawcouncil.au.

Acknowledgements
The Law Council acknowledges the assistance of the following Constituent Bodies in preparing this submission:

• Law Society of New South Wales;
• Queensland Law Society;
• Law Society of South Australia; and
• Law Institute of Victoria.

The Law Council is also grateful for the contribution of its Futures Committee and the following Committees of its Business Law Section:

• Competition and Consumer Committee;
• Digital Commerce Committee;
• Intellectual Property Committee;
• Media and Communications Committee; and
• Privacy Law Committee.

Executive Summary
1. The Law Council welcomes the opportunity to provide a submission to the
Department of Industry, Science and Resources (the Department) in response to
the Discussion Paper on Safe and Responsible AI in Australia (Discussion Paper).1

2. Artificial intelligence (AI) has the potential to deliver significant opportunities and
benefits across the economy and society more broadly, and can be expected to
cause disruption and innovation in many key industries. The Law Council considers
this to be a timely opportunity to consider reform in response to advancements in
technology and the implications for the Australian public.

3. Existing AI governance mechanisms in Australia are largely voluntary and rely on
general regulatory frameworks. In the Law Council’s view, the significant risks
posed by the use of AI justify a strengthened and precautionary approach to AI
regulation, where there is evidence that existing laws and regulations are insufficient
to address the issues and harms arising. Further regulation should be multifaceted.
It should include the expansion of current legislation and, where necessary, new
targeted legislation, not just a soft law approach of a voluntary code. However, the
Law Council considers that the Australian Government should not seek to explicitly
regulate AI via a comprehensive ‘AI Act’.

4. The Law Council suggests the establishment of a dedicated interdepartmental
taskforce to:

(a) provide detailed, technical advice and guidance;

(b) consider international developments;

(c) provide a forum for collaboration, information sharing and consultation; and

(d) coordinate consideration of AI regulation with state and federal agencies.

5. In the short term, the Australian Government should consider the regulation of
high-risk AI technology and applications. In particular, enhanced regulation of the
collection and use of biometric information (such as the use of automated facial
recognition technology) and ‘social scoring’ practices, and options to reduce the risk
of people being misled by AI-generated fakes and scams.

6. Further, following the release of the Final Report of the Royal Commission into the
Robodebt Scheme,2 comprehensive regulatory reform is required to ensure that the
use of automated decision making (ADM), including by the Australian Government,
is transparent, capable of review, and consistent with administrative law principles.

7. The Law Council does not, at this stage, advocate the adoption of any particular
international regulatory model. Australia has an opportunity to assess the regulatory
models adopted by other jurisdictions and to determine an optimal and bespoke
approach for Australia that reflects the nuances of Australia’s pre-existing
constitutional and regulatory framework, and different local market environment.

1 Department of Industry, Science and Resources (Cth), Safe and Responsible AI in Australia (Discussion
Paper, June 2023) (‘Discussion Paper’).
2 Royal Commission into the Robodebt Scheme (Final Report, July 2023).

Definitions
8. Given the inherently global nature of AI, the Law Council supports legislative and
regulatory definitions of AI that are broadly consistent with existing approaches
adopted in the United Kingdom (UK) and the European Union (EU).

9. Whilst there is no universally accepted definition of AI, the UK describes AI by
reference to a combination of adaptivity and autonomy, being ‘two characteristics
that generate the need for a bespoke regulatory response’.3 By comparison, the
European Parliament recently proposed a refined definition of AI,4 in alignment with
the Organisation for Economic Cooperation and Development’s Principles for
responsible stewardship of trustworthy AI (OECD Principles)5 and the United States
National Institute of Standards and Technology’s Artificial Intelligence Risk
Management Framework (NIST AI Risk Management Framework).6

10. The amended definition included in the Draft Artificial Intelligence Act recently
adopted by the European Parliament for negotiation with EU member states
(Draft EU AI Act) refers to AI as ‘a machine-based system that is designed to
operate with varying levels of autonomy and that can, for explicit or implicit
objectives, generate outputs such as predictions, recommendations, or decisions
that influence physical or virtual environments’.7

11. Without alignment on the definition of AI, major trading partners may be deterred
from engaging with Australian entities and consumers on significant economic and
commercial activities.

12. The Law Council acknowledges that opportunities presented by certain AI models,
including large language models (LLMs) and multimodal foundation models
(MFMs), are almost impossible to forecast accurately over the next decade,8
creating difficulty in crafting a suitably comprehensive definition. To mitigate this,
any proposed definitions should be technologically neutral and sufficiently flexible to
accommodate fast-paced technological developments.

13. The definitions could be enhanced with the inclusion of a definition relating to AI
robotic systems. Deployment of autonomous AI, without real-time human
supervision, such as in robot control, should be within the scope of any regulatory
regime.

14. Moreover, any proposed definitions should be subject to regular review and be
updated as necessary to reflect the changing understanding, knowledge, and
application of AI systems.

3 Department of Science, Innovation and Technology (UK), A pro-innovation approach to AI regulation (March
2023) 22.
4 European Parliament, Proposal for a Regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021)0206) as amended by European Parliament, Amendments adopted by the
European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament and of the
Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (P9_TA(2023)023) (‘Draft EU AI Act’).
5 Organisation for Economic Cooperation and Development, Recommendation of the Council on Artificial Intelligence (OECD/LEGAL/0449) (Report, May 2019).
6 National Institute of Standards and Technology, Department of Commerce (US), Artificial Intelligence Risk Management Framework (AI RMF 1.0) (January 2023).
7 European Parliament, Draft EU AI Act, art 3(1) (as amended by Amendment 165).
8 Discussion Paper, 9, citing Genevieve Bell, et al, Australian Council of Learned Academies, Rapid Response Information Report: Generative AI (March 2023).

Governance and regulation
Current state of play
15. Innovation in AI technologies is at an early stage of development, and it is near
impossible to ascertain how rapidly evolving AI technologies will impact upon the law
and Australia’s existing regulatory approaches.

16. Existing Australian AI governance mechanisms have been largely voluntary and rely
on general regulatory frameworks embedded in consumer, competition, corporate,
criminal, and privacy laws (among others).9 Australia’s Artificial Intelligence Ethics
Framework (AI Ethics Framework) provides voluntary principles for designing and
implementing AI responsibly.10 The principles align with the OECD Principles and
are intended to supplement existing regulations and practices.

17. Across international jurisdictions, a variety of alternative approaches have been
explored, including, but not limited to, bans, the creation of standalone AI laws and
specialist tribunals. However, single mechanisms such as these provide inadequate
protection, and greater cohesiveness and further regulatory and governance
responses will be required to mitigate emerging risks.

Approaching additional regulation
18. The Law Council supports both increased governance initiatives and—where there
is evidence that existing laws and regulations are insufficient to address the issues
and harms arising—enhanced regulatory measures to ensure that AI is developed,
implemented, used, and made available safely in both the public and private
sectors.

19. Any governance initiatives or enhanced regulation must balance the worthy
objective of encouraging Australia to become a leader in developing and
implementing AI applications for the benefit of Australians (recognising the
dominance of foreign-owned and headquartered technology providers) against the
need to provide holistic protection from harm.

20. Any overarching regulatory framework should be principles-based, people-centric
and underpinned by a range of supporting regulatory mechanisms.11 Systems must
be ethical, lawful, and technologically robust, rigorously reviewed and appropriately
governed, incorporating appropriate risk allocation (taking into account relative
bargaining power of the entities involved), to enable ongoing monitoring and
reporting mechanisms, as well as avenues to contest decisions.

21. Whilst jurisdictions around the world have been cautious to avoid adopting
a ‘heavy-handed’ approach to AI regulation which may potentially stifle innovation,
analogous approaches taken with respect to the development of social media
platforms and protection of privacy suggest that a precautionary approach ought to
be applied. This is particularly important for ill-defined and rapidly evolving
technologies, the parameters of which are unknown. Citizens are already at a
significant informational disadvantage in terms of AI-related systems, AI-related data,
and associated infrastructure access, and often lack the resources to challenge

9 Discussion Paper, 10.
10 Department of Industry, Sciences and Resources (Cth), Australia’s Artificial Intelligence Ethics Framework
(Web Page, 7 November 2019).
11 World Intellectual Property Organisation, WIPO Conversation on Intellectual Property (IP) and Artificial Intelligence (AI) (Report, Third Session, November 2020) 4.

alleged AI abuses, including ADM. In this regard, the Robodebt Scheme clearly
emphasised the importance of transparency and AI creator/user accountability to
civil society in a context of increased adoption of ADM and the training and use of
AI-related technologies more broadly.12

22. There are laws that already apply to AI.13 The growth of the digital economy, and
changes to the way in which businesses engage with their supply chains and end
users, are driving regulators in several specialist areas to consider the effectiveness
of current regulatory frameworks. Building on Australia’s existing regulatory
regimes, further regulation of AI is required together with meaningful enforcement
powers.

23. The development of safe and responsible AI in Australia requires an interoperable
framework that will enable Australian organisations to innovate. It must also provide
sufficient safeguards, including against infringement of others’ pre-existing rights or
assets, such as intellectual property rights, confidential information and personal
information. The framework should be flexible, scalable, and future-proof. It is also
critical that a harmonised approach is taken across regulation to ensure consistency,
avoid duplication, and avoid fragmentation of regulation.

Flexibility

24. The framework should build upon, and be adapted to, existing processes that
Australian organisations have in place—for example, enterprise risk frameworks and
methodologies, software and other technology project assessment and
management frameworks and methodologies, privacy and security by design and
default, and privacy risk assessment. It must also be cognisant of existing laws,
including as examples, privacy, data security, product safety, consumer protection,
and human rights-based laws such as anti-discrimination statutes.

Scalable

25. Since data and provision of cloud-based services have no geographic boundaries,
the framework must be scalable. As different regulatory models in diversely
regulated jurisdictions apply at various points in a data-driven service supply chain,
AI regulatory initiatives should be determined with reference to evolving regulation in
other jurisdictions (see discussion beginning at paragraph 83 below). These models
will impact both links in the AI supply chain and Australia’s assessment of the extent
of the impact and effectiveness of that regulation to achieve safe and responsible AI
at the Australian end of that supply chain. The framework needs to take into
consideration and leverage international initiatives that can facilitate responsible and
accountable flows of data, and cross-border business models that enable Australian
businesses to expand and compete globally and cost-effectively.

Future-proof

26. A future-proof framework will support Australian organisations as adopters of AI,
as producers of AI, and as detectors of AI misappropriation.

27. The Law Council supports adaptability through the adoption of principles-based
legislation, providing for legal responsibility and substantive accountability of entities
across the AI service supply chain. This should include measures to ensure that
entities have appropriate incentives to adopt risk of harms assessments, mitigation
12 Royal Commission into the Robodebt Scheme (Final Report, July 2023) (Section 4: Automation and data matching).
13 Discussion Paper, 10.

and management of residual risks, supported by a risk management framework.
The risk management framework should broadly align with existing risk frameworks
because, to some degree, the fundamental risks remain the same; they are merely
amplified, along with the propensity for false information.

28. The Law Council supports technology neutrality as a key principle to underpin any
new regulation of AI. Given the rate of change in technological development, the
objective should be to avoid technology- or platform-specific laws that become
redundant or only partially effective. Under such a regulatory framework, the
question of how an analysis or decision is made or delivered is irrelevant: it is the
analysis or decision itself that is regulated.

Developing further regulation
29. In approaching the development of further regulation, the Law Council supports
adherence to the following principles:14

(a) establishing a case for action before addressing a problem. The Discussion
Paper references some possible harms to individuals arising from AI, but
further consideration of AI technologies, AI applications and potential harms
via a market study or other inquiry would help identify problems and design
solutions that are targeted at those problems;

(b) considering a range of feasible alternative policy options and assessing their
benefits and costs;

(c) adopting the policy options that generate the greatest net benefit for the
community;

(d) in accordance with the Competition Principles Agreement,15 legislation should
not restrict competition unless it can be demonstrated that:

(i) the benefits of the restrictions to the community outweigh the costs; and

(ii) the objectives of the regulation can only be achieved by restricting
competition;

(e) providing effective guidance to relevant regulators and regulated parties in
order to ensure that the policy intent and expected compliance requirements
of the regulation are clear;

(f) ensuring that regulation remains relevant and effective over time;

14 See further, Law Council of Australia, Submission to Digital Technology Taskforce, Department of the Prime
Minister and Cabinet, Positioning Australia as a leader in digital economy regulation – Automated decision making and AI regulation – Issues Paper (3 June 2022) 7-8.
Competition and consumer issues
Consumer law issues
132. The Discussion Paper acknowledges that the potential risks of AI are currently
governed by both general regulations and sector-specific regulations, including the
Australian Consumer Law (ACL)67 and competition law.

133. There are currently general and specific provisions in the ACL which seek to provide
mechanisms to protect businesses and consumers, and which would appear to be
capable of applying in various AI contexts. This includes:

(a) Section 18 of the ACL which prohibits businesses from engaging in misleading
or deceptive conduct in trade or commerce. Additionally, sections 29 and 33–
36 prohibit businesses from engaging in various forms of false, misleading, or
deceptive conduct in connection with the supply of goods or services. As
noted in the Discussion Paper, the prohibitions on misleading conduct have
been successfully used to pursue misleading algorithmic decision making.68

(b) Sections 20 and 21 of the ACL which prohibit businesses from engaging in
unconscionable conduct when dealing with other businesses or their
customers. Unconscionable conduct means conduct that is so harsh it goes
against good conscience.

(c) Section 23 of the ACL provides that a term of a standard form consumer
contract or small business contract that is unfair is void. Recent amendments
to the unfair contract terms regime will make the use of unfair contract terms
illegal and introduce significant penalties for breach of these provisions, from
9 November 2023. These provisions could apply to terms and conditions on
which AI applications are offered to consumers or small businesses (noting the
broader definition of small business that will apply from 9 November 2023),
preventing terms that are unfair (such as limitations on liability or exclusion of
liability for losses caused or contributed by the provider’s negligence or
recklessness, or opaque requirements to consent to data use/disclosure that
may cause detriment to the user).

(d) Sections 51–63 of the ACL contain consumer guarantees that apply to the
supply of goods and services. These provisions would apply to AI applications
to the extent they are goods or services supplied to a consumer in Australia.
Relevantly:

(i) the definition of ‘goods’ in subsection 2(1) includes ‘computer
software’;69

(ii) the definition of ‘services’ in subsection 2(1) includes any ‘benefits…that
are, or are to be, provided, granted or conferred in trade or commerce’;

(iii) under section 3, a person is a consumer if they have acquired goods for
less than $100,000 (or a greater amount prescribed) or the goods were

67 Competition and Consumer Act 2010 (Cth) sch 2 (‘Australian Consumer Law’).
68 See, eg, Australian Competition and Consumer Commission v Trivago N.V. [2020] FCA 16 (20 January
2020).
69 The application of the ACL to downloadable computer software was considered in Australian Competition and Consumer Commission v Valve Corporation (No 3) [2016] FCA 196.

of a kind ordinarily acquired for personal, domestic or household use or
consumption;

(iv) the consumer guarantees relating to acceptable quality, fitness for
purpose, supply of goods by description and due care and skill would
appear capable of applying to at least some AI applications; and

(v) as noted in the Discussion Paper, the ACL contains various remedies for
breach of the consumer guarantees.

(e) Provisions of Part 3-3 of the ACL (relating to safety of consumer goods and
product related services) would also appear to be capable of applying to some
AI applications. ‘Consumer goods’ are goods either intended to be used, or of
a kind likely to be used, for personal, domestic or household use or
consumption (see subsection 2(1) of the ACL).

(f) Part 3-4 of the ACL (relating to information standards) would also appear to be
capable of applying to some AI applications. This part is not limited to
consumer goods or services supplied to consumers. Section 134 empowers
the Commonwealth Minister to make information standards which can require
the provision of specified information. The Part outlines the consequences of
not adhering to such standards.

(g) Part 3-5 of the ACL (relating to liability of manufacturers for goods with safety
defects) could also be applicable in some circumstances—for example, if the
AI application in question is a ‘good’ and it has a ‘safety defect’ that causes
injury to an individual. ‘Manufacturer’ has a broad meaning under section 7 of
the ACL, which would appear to be capable of applying to suppliers and
developers of AI applications. Goods have a safety defect if their safety is not
such as persons generally are entitled to expect (section 9 of the ACL).
However, it is not a requirement that goods are entirely free from risk.

134. The Law Council suggests that the Australian Government assess whether
existing competition and consumer laws are adequate to deal with any conduct of
concern or harms arising from AI applications. It is, however, difficult to do this in
the abstract.

135. While the Discussion Paper outlines some potential challenges for or concerns from
AI, there is a lack of detail about AI technologies and applications that are currently
in use in Australia and the specific concerns or harms that are occurring or likely to
emerge. A market study by the Australian Competition and Consumer Commission
(ACCC) would be one way to build a deeper understanding of the current landscape
and issues. This would provide the Government with sufficient information to
properly consider whether there are specific harms or serious risks that need to be
addressed through additional regulation. While the terms of reference for the Digital
Platform Services Inquiry appear broad enough to capture some AI applications (for
example, AI used in general search services or social media),70 a market study
covering the broader AI sector, that does not require the ACCC to gather information
and produce a report within a six-month timeframe, would be preferable in this
context.

70 Competition and Consumer (Price Inquiry—Digital Platforms) Direction 2020 (Cth).

136. Any additional regulation to address consumer (or competition) issues arising from
AI should carefully consider the existing and emerging regulatory landscape to
ensure consistency and avoid duplication or fragmentation of regulation.

137. For example:

(a) The Government is continuing to explore multiple law reform proposals and
other initiatives that have overlapping implications and relevance to the digital
economy, including digital platform rules proposed by the ACCC, the exposure
draft of the Communications Legislation Amendment (Combatting
Misinformation and Disinformation) Bill (Misinformation Bill—discussed
further below), broader consultation on the implementation of an
economy-wide prohibition against unfair trading practices, and Government
initiatives in relation to Digital Identity. Other Government initiatives relevant to
AI are outlined in Attachment A of the Discussion Paper.

(b) The Government is yet to release its response to the ACCC’s digital platform
regulation recommendations. If the Government adopts the ACCC’s
recommendations, there may be scope to consider the application of any
digital platform regulatory framework to AI applications.

(c) The Government is currently consulting on the Misinformation Bill, which
contains a broad definition of digital platforms that could cover AI services.
The Bill enables the Minister, by legislative instrument and following
consultation with the ACMA, to specify a new subcategory of digital platform
service.

(d) The Government consulted on options aimed at improving the effectiveness of
the consumer guarantee and supplier indemnification provisions under the
ACL between December 2021 and February 2022. The consumer guarantee
monetary thresholds were increased from $40,000 to $100,000, among other
changes. The ACCC is advocating for the ACL to be amended further to
enhance the consumer guarantees protections—relevantly, to make it a
contravention of the law:

(i) for businesses to fail to provide a remedy for consumer guarantees
failures, when they are legally required to do so, and

(ii) for manufacturers to fail to reimburse suppliers for consumer guarantees
failures that the manufacturers are responsible for.

138. Noting the above examples, a concerted effort is needed to avoid fragmentation across the various Australian reform processes, in order to reduce uncertainty and unintended consequences for those subject to multiple regulatory frameworks. The Law Council considers that a complementary and holistic approach is important for improving transparency for consumers and small businesses, as well as for regulatory consistency.

How will the law treat erroneous outputs?

139. The Discussion Paper notes that inaccuracies from AI models can ‘create many problems’.71 The listed examples include unwanted bias and misleading or entirely erroneous outputs. The latter example warrants further consideration because some relevant legal frameworks and principles appear inadequate to address such instances.

140. The Law Council provides the following example to demonstrate the current
uncertainty in this regard:

Assume, for example, that you enter into a motor vehicle insurance
policy with an insurer. The insurer does not deal with you directly but
rather through its website, where you have the option of selecting from a
range of policies. Once you make a selection, the insurer generates an
autonomous smart contract (comprising the chosen policy) and
automates its performance through a blockchain network. Your
premiums are paid in cryptocurrency out of your digital wallet, rather
than a traditional nominated bank account. The smart contract is coded
with complex algorithmic processing capacity. Based on your input on
the proposal form, it is able to develop a risk profile for you and
determine what your premium and other conditions should be (if any).
You will be offered insurance if you are deemed a ‘reliable driver’. Now
assume that the software underpinning the smart contract determines
that you are a ‘reliable driver’, not on the basis of your driving history but
on the basis, for example, of your social or professional achievements
(which it has discovered through an internet trawl). In other words, the
AI-driven smart contract has determined that you are a reliable driver
where a human insurance agent conducting a more nuanced
assessment would not have done so.

Could the insurer claim that the AI-driven smart contract had made a
‘mistake’ and that the mistake doctrine in contract law applied so as to
deny the enforceability of the insurance agreement? The answer is
legally unclear. It is not certain if the mistake doctrine could apply
because the parties have not been ‘mistaken’ in the truest sense as to
the intended effect of their accord. The smart contract was coded to
make decisions and did so. The fact that the decision was unintended by the insurer, entirely undesirable, and irrational, in the sense that no rational human actor would have made the same decision through the organically intuitive human decision-making process, is ostensibly irrelevant under the various existing categories of legal ‘mistake’. This
result seems absurd given no reasonable insurer would ever have
meant to offer insurance to an untrustworthy party. There are significant
doubts as to how the mistake doctrine could address the actions of
errant AI technologies such as this.72

141. To the extent that AI technologies constitute computer software, they would be
recognised as ‘goods’ for the purposes of the ACL, as the definition of goods under
the ACL expressly extends to computer software. The position, however, is less
certain for Sale of Goods legislation enacted in the states and territories, where
‘goods’ do not expressly extend to computer software. For example, there is NSW

71 Discussion Paper, 7.
72 See Mark Giancaspro, ‘“I, Contract”: Evaluating the Mistake Doctrine's Application Where Autonomous Smart Contracts Make “Bad” Decisions’ (2022) 45(1) Campbell Law Review 53.

Supreme Court authority suggesting that software is not a good under the Sale of Goods Act 1923 (NSW) if supplied in the form of an intangible download.73 Where, however, the software is supplied on a physical storage device of some kind, it could constitute a good.74
It is conventional for AI software to be sold and distributed through websites or via
downloads. While it is likely that downloadable AI software is a ‘good’ for the
purposes of the ACL, it is questionable whether such software would fall within the
ambit of state and territory Sale of Goods legislation.

142. There is significant merit in the Government clarifying the boundaries of liability in
the foregoing situations and others where AI technologies do not perform as
anticipated and loss results.

Competition law issues
143. Widespread access to AI tools has the potential to increase innovation and
competition in many markets, including by Australian and international firms. This is
especially important in a local context, where Australia has been described by the
OECD as performing poorly in its use of data-driven tools, such as AI and data
analytics.75 However, the level of regulation imposed in respect of AI tools can itself
affect competition and, in turn, outcomes in markets.

144. It is important to note the distinction between innovation in AI tools and innovation
using AI tools. Innovation in AI tools refers to the development and improvement of
the actual AI technologies, algorithms, and frameworks. On the other hand,
innovation using AI tools refers to the creative and novel applications of existing AI
technologies in various industries and sectors.

145. Most directly, regulation of the use of AI tools can affect innovation and competition
across the economy, because firms may face limitations and constraints in
deploying AI technologies to enhance their products, services, and processes.
Stricter regulations may increase the cost and complexity of implementing AI tools,
making it more challenging for firms to adopt them effectively. Consequently, firms
may be less certain of generating a return on investment from innovative application
of AI tools, and therefore may be less incentivised to invest. Australian businesses,
in particular, may accordingly find it more difficult to compete internationally if
Australian-specific regulation puts them at a relative disadvantage to international
firms.

146. Regulation introduced in respect of AI tools is likely to include some fixed cost
imposed on firms (that is, a cost that does not vary with output). To the extent that
these fixed costs become sunk, they can represent a barrier to entry to new firms.
Large incumbent firms may have a greater ability to recover those fixed costs over a
larger range of output. In other words, larger incumbent firms may be favoured by a
larger regulatory burden. This effect would be compounded if Australia adopted a
significantly different regulatory framework from those being developed
internationally, especially because product innovation implemented by Australian
businesses tends to rely on diffusion of knowledge and technology (as opposed to
new-to-the-world, novel innovation).76

73 See, eg, Gammasonics Institute v Comrad Medical Systems [2010] NSWSC 267.
74 See, eg, Toby Constructions Products v Computer Bar Sales (1983) 2 NSWLR 48.
75 See, eg, Productivity Commission, Advancing Prosperity: 5-year Productivity Inquiry report (17 March 2023) vol 2, 51.
76 Productivity Commission, Advancing Prosperity: 5-year Productivity Inquiry report (17 March 2023) vol 5, 8.

147. A reduction in competition resulting from a large or discriminatory regulatory burden
would likely curtail the welfare-enhancing innovation that arises as firms jockey to compete with each other. Ultimately, striking a balance between an
appropriate level of regulation and its effect on innovation and competition is
important for enhancing the welfare of Australians. Different scales of regulation
could assist with striking an appropriate balance, although the risks of creating an uneven playing field would need to be considered. The Productivity Commission
notes that innovations with lower risks of harm can benefit from ‘regulatory
sandboxes’, whereas more complex systems that may have greater risks of harm
can benefit from advance preparation of detailed regulatory frameworks.77 This is
consistent with the Government’s approach of adopting a three-tiered system to
classify AI tools as low, medium or high risk. Consistent with the Government’s guide to policy impact analysis, the Government should ensure that it considers impacts on
competition, innovation, and consumers when it assesses costs and benefits of
regulation.78

148. Put simply, the potential benefits to Australian businesses and consumers arising
from the application of innovative AI tools (including effects on competition) are
significant, and the Government should ensure that these benefits are considered when
assessing regulatory tools for each aspect of AI.

Other areas for consideration
Increasing public trust in the use of AI
149. The Discussion Paper notes that adoption rates of AI across Australia remain relatively low, likely due to low levels of public trust and confidence in AI technologies and systems.79 Concerns about AI have been intensified by instances in which AI has been used in violation of human rights (including privacy), with manipulative tactics, or to reinforce discrimination.80

150. By way of example, the Robodebt Scheme saw the Australian Department of
Human Services (now Services Australia) automate its interaction with Centrelink
customers via the PAYG program.81 The inaccuracies and inequities of the scheme
eroded public trust in the use of AI and in the Government and its institutions, and significantly undermined confidence in Government administration.

151. In response to these valid concerns, the Law Council suggests the following
initiatives or actions:

(a) greater transparency regarding the use of AI (including by Government);

(b) additional regulatory and governance responses; and

(c) further investment in education and research.

77 Ibid, 10.
78 Department of the Prime Minister and Cabinet (Cth), The Australian Government Guide to Policy Impact
Analysis (17 February 2023).
79 Discussion Paper, 3.
80 KPMG Australia and The University of Queensland, Trust in Artificial Intelligence: Australian Insights

(Report, October 2020) 2.
81 Royal Commission into the Robodebt Scheme (Final Report, July 2023).

152. In addition to regulatory initiatives, the Law Council highlights the necessity of education, both to encourage trust and as a risk mitigation strategy, to support responsible AI use and development in Australia.

153. Public engagement forums which facilitate dialogue between AI experts and the
public in an easy-to-understand way may assist. Information campaigns, public
lectures, and partnerships with schools and universities could also be used. An
educational approach will produce a more informed public, better able to understand
and participate in discussions about AI and its ethical use.

Ethical responsibilities and AI software testing
154. There are three main actors in the use of AI:

• manufacturers of the AI software;

• data aggregators; and

• end users (public and private sector entities) of the AI deployed.

155. Each of these parties has ethical responsibilities that must be considered.

156. Some contributors to this submission have suggested there should be a gatekeeper approach before AI software is released. The Law Council understands that AI often involves three sub-technologies: a large data repository (which is the large language model), the inference engine (which is the analyser), and a set of sub-technologies based on statistical analysis to create results and a feedback loop (which is the learning structure to advance the knowledge base of the AI engine). If the data is flawed at any stage of this process, greater errors will be produced in the results.

157. At the launch of the Robodebt scheme, the system had already demonstrated signs of inadequacy.82 While the automated system displayed a litany of inaccuracies in
its limited release stage in July 2016, the Scheme was fully rolled out in September
2016 ‘although no proper evaluation of the pilot or manual program had taken place
and there were a number of unresolved problems’.83 The Robodebt scheme exposes the dangers of poorly designed automation and of failing to correct automation errors in the early stages of the process.

158. The car industry provides a useful analogy and potential template for AI pre-release: vehicles are not released to the public until there has been extensive design, testing and approval. Based on this model, consideration could be given to requiring AI technology and software to be subject to formalised testing and transparency requirements. Currently, with the exception of safety-critical software deployed in aviation and automated vehicles, most AI software is not subject to testing in this way. However, for this to be effective, there needs to be a clear understanding as to what AI software comprises. It is important to emphasise that a highly specialist and independent entity would be required to devise and administer any testing regime.

Privacy
159. In considering the further development of Australia’s privacy laws as compared with the European General Data Protection Regulation (GDPR), it is important to note that Australia does not have a federal Bill of Rights to support the jurisprudence that

82 Royal Commission into the Robodebt Scheme (Final Report, July 2023) vol 2.
83 Royal Commission into the Robodebt Scheme (Final Report, July 2023) vol 2, xxvi.

underlies how the GDPR is interpreted and applied in European courts.84 The
GDPR is given a more extensive and protective application than Australia’s privacy
laws because European courts give effect to human rights jurisprudence when
interpreting the GDPR. Without similar rights-based jurisprudence in Australia, it is
particularly important that Australian governments and their agencies are
demonstrably data trustworthy, and remain accountable for the data they collect,
use, disclose and process.

160. In the Law Council’s view, risk management and any assessment developed as part
of the proposed framework should consider existing requirements and processes
under the Privacy Act, such as privacy impact assessments. Such considerations
are important from the perspective of a co-ordinated and holistic regulatory
approach and will assist in limiting the compliance burden on organisations.
A sensible approach to AI regulation is to ask whether rules that restrict or prohibit particular uses of AI, or that mandate the application of a particular risk assessment framework or methodology, are justified, or whether ‘detect and respond’ incentives are sufficient to cause appropriate mitigation of risks by regulated entities.

Supporting compliance by small to medium enterprises
161. The Law Council acknowledges that, depending on the regulatory approach, introducing regulation of AI could affect innovation, particularly for small to medium enterprises (SMEs). Any regulatory approach must avoid duplication, be consistent and proportionate, and provide clear guidance and compliance support to SMEs.

162. The Law Council supports education and other tools to encourage and facilitate compliance. Organisations and industry should be supported to assess the risks to individuals, and the impact on their fundamental rights, which may result from AI use. This would be in addition to education about the existing legal frameworks that apply to the development and use of AI, including with respect to privacy and consumer protection. Thought should also be given to requiring the key actors in AI applications to assist small businesses with relevant assessments.

163. Examples of tools to support positive business practices from the National AI Centre and Singapore are outlined in the Discussion Paper.85 The model contractual clauses published by the European Commission for data transfers between the EU and non-EU countries are another example of a tool that could warrant consideration in the Australian context.

164. As discussed, the Law Council supports more general education for the public with respect to AI.

Infrastructure and AI
165. Use of AI in the operation of infrastructure is likely to grow significantly in coming
years. This will require wide-reaching safeguards, especially for critical
infrastructure. Current critical infrastructure legislation does not, in the Law
Council’s view, sufficiently address the operation of AI.

84 Regulation (EU) No 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L 119/1 (‘GDPR’).
85 Discussion Paper, 15, 24.

Consideration of intellectual property
166. The Discussion Paper expressly states it will not consider intellectual property,
particularly copyright. The Discussion Paper says copyright issues will be
separately discussed at a ‘Ministerial Roundtable on Copyright’ forum established by
the Attorney-General. The Law Council understands AI will be addressed in the next
roundtable discussion in late August 2023.

167. For present purposes, the Law Council notes there are a number of important issues
in relation to the intellectual property implications of AI. Generative AI is created
through significant data input. LLMs are trained on the basis of enormous volumes
of text and MFMs are trained on equally voluminous quantities of data, not limited to
text but also including images and speech. To date, there has been no detailed
policy consideration of these intellectual property issues, including those arising in connection with the use of copyright material. Resolving these issues is critical for
stakeholders involved in the process. The Law Council urges expeditious
consideration of these intellectual property issues and looks forward to contributing
to that process.

Justice system and the legal sector
168. Apart from automated transcription services, the Law Council does not support the
use of ADM and AI processes in Australian courtrooms or the justice system more
broadly, particularly where it would limit judicial discretion. The Law Council is
concerned that, if used, ADM processes which rely on factors derived from historical data to predict future outcomes would raise serious issues for the entitlement to due process and compromise judgments predicated on the overall circumstances of the
case. ADM processes are inappropriate for use in relation to decisions which could
have a significant effect on individual liberties and freedoms.

169. Members of the legal profession have queried whether there is a need for a specific framework encompassing ethical standards and other professional obligations, guidance on what can and cannot be completed by AI in legal practice, and the level of transparency required of legal practitioners as to their use of AI. Law practices are already using AI in various forms (including generative AI) and at varying levels. However, reliance on AI tools in legal practice does not diminish the professional judgment a legal practitioner is expected to bring to a client’s matter. It has been suggested that there should be consistency across areas of practice and that any framework on AI use in legal practice should be clear and easily understood so as not to diminish the quality of legal services. The Law Council and its Constituent Bodies have an important role to perform in setting appropriate standards and providing guidance to the legal profession.

