Human Technology Institute

Department of Industry, Science and Resources
Discussion Paper, ‘Safe and responsible AI in Australia’

Submission
Human Technology Institute, UTS
9 August 2023
About the Human Technology Institute
The Human Technology Institute (HTI) is building a future that applies human values to new technology. HTI embodies the strategic vision of the University of Technology Sydney (UTS) to be a leading public university of technology, recognised for its global impact specifically in the responsible development, use and regulation of technology. HTI is an authoritative voice in
Australia and internationally on human-centred technology. HTI works with communities and organisations to develop skills, tools and policy that ensure new and emerging technologies are safe, fair and inclusive and do not replicate and entrench existing inequalities.
The work of HTI is informed by a multi-disciplinary approach with expertise in data science, law and governance, policy and human rights.
In this submission, HTI draws on several of its major projects, including:
• Facial Recognition Technology: Towards a model law. In a world-leading report published in September 2022, HTI outlined a model law to govern facial recognition technology in Australia.
• AI Corporate Governance Program, which aims to broaden the understanding of corporate accountability and governance in the development and use of AI.
• The Future of AI Regulation in Australia, which is considering the major legal and policy issues related to AI and will present a roadmap for reform.
For more information, contact us at hti@uts.edu.au
Acknowledgement of Country
UTS acknowledges the Gadigal people of the Eora Nation, the Boorooberongal people of the
Dharug Nation, the Bidiagal people and the Gamaygal people upon whose ancestral lands our university stands. We would also like to pay respect to the Elders both past and present, acknowledging them as the traditional custodians of knowledge for these lands.
Authors: Prof Nicholas Davis, Sophie Farthing and Prof Edward Santow
HTI acknowledges the contribution and support of India Monaghan, Secondee –
HTI Policy.
To discuss this submission, please contact us at hti@uts.edu.au.
Table of contents
Executive summary
List of recommendations in this submission
Background
    The need for regulatory reform
    The role of humans in sociotechnical systems that use AI
    HTI’s work on AI regulation and governance
Part 1: An AI regulatory strategy for Australia
    Outline of Part 1 of this submission and Recommendation 1
    Regulation must be practical and effective
    A risk-based approach to AI
    The default: technology-neutral law
Part 2: An agenda for regulatory reform
    Outline of Part 2 of this submission
    Recommendation 2: implementing landmark review proposals
    Recommendation 3: legal gap analysis regarding development & use of AI
    Recommendation 4: an AI Commissioner
    Recommendation 5: an AI Assurance Framework for the private sector
        AI assurance frameworks
        Australian public sector AI assurance frameworks
        An Industry AI Assurance Framework
    Recommendation 6: an Australian AI Act
Executive summary
The Department of Industry, Science and Resources (DISR) Discussion Paper,
Safe and responsible AI in Australia (June 2023) (the Discussion Paper)
proposes a clear policy intent for the Australian Government regarding artificial
intelligence (AI) and related technologies. That policy intent might be
summarised as follows: well-considered regulation and governance measures
can build public trust, thereby enabling Australia’s ‘economy and society to reap
the full benefits of these productivity-enhancing technologies’.1
The Human Technology Institute (HTI) commends this policy intent. It sets an
appropriate balance between promoting positive innovation for economic and
broader gains, while also ensuring that Australians are protected from harm. AI
promises significant benefits for Australians. In order to realise those benefits
without causing harm, it is important that we develop and deploy AI systems in
safe and responsible ways.
As the Discussion Paper makes clear, achieving AI’s promise will be possible
only if Australians trust the underlying technology, as well as how AI is used by
the public and private sector. Community trust is especially important where AI
is used in high-stakes decision making.
While there has been an almost exponential rise in the development and use of
AI, leading research reflects persistently low levels of community trust in AI.
Only a third of Australians trust AI systems, and fewer than half of Australians
perceive that the benefits of AI applications outweigh the associated risks. Such
research findings reflect a perceived failure, to date, on the part of both industry
and government to address a wide range of substantive concerns about AI,
including in relation to cybersecurity and data-sharing risks, deskilling and
subsequent technological unemployment, and threats to human rights.
Addressing these concerns will require increasing the trustworthiness of AI as it
is applied by businesses, governments, and others. One critical driver of
trustworthiness is the existence of effective, fit-for-purpose regulation. Where
clear legal guardrails promote safe and responsible innovation, and the law
provides for readily available forms of redress when technology is misused or
otherwise results in harm, community confidence around the safety and benefits
of technology will tend to improve.
Regulation is sometimes held out as the enemy of innovation – the idea is that
regulation unhelpfully puts a brake on the development of new, beneficial
products and services. Poorly drafted laws can indeed have a net negative
impact on innovation. However, regulation, per se, is not the problem. Where
the law is inadequate or uncertain, this can encourage harmful innovation and
discourage responsible innovation. Furthermore, centuries of evidence prove
that well-conceived legal guardrails can simultaneously protect the community
while actively fostering innovation by setting clear parameters within which
1 Department of Industry, Science and Resources, Safe and responsible AI in Australia (Discussion Paper, June 2023) 4.
innovators can operate. It is essential that Australia develop and adopt
regulation which promotes innovation in this way.
Australian law, like the laws of all comparable jurisdictions, is generally
technology-neutral. This means that legal obligations already apply to the
development and use of AI in the same way as they do to other technologies. In
considering how Australia should regulate AI, the first step is to consider how
current laws apply, and identify any barriers to the effective enforcement of
existing obligations.
There are also gaps in our existing law as it applies to AI. Rather than
identifying and filling these gaps through law reform, both government and
industry have, to date, over-relied on self-regulatory measures (such as ethical
guidance), which have had limited impact on changing behaviours.2 Gaps in the
law should be filled, and low-impact, self-regulatory measures should be
augmented with more effective measures.
More fundamentally, Australia has the opportunity to take an economy-wide
regulatory approach to AI. Such an approach could help ensure that our law is
effective, coherent and innovation-enhancing, while also safeguarding against
risks of harm.
Over the last decade, as AI has driven the Fourth Industrial Revolution around
the world, Australia has been slow to adopt a clear and effective policy and
regulatory strategy. The establishment of a clear policy intent – one that
balances the needs of the economy and Australians as a whole – is a welcome
first step.
There is now an urgent task to realise this policy intent through reform. As
summarised in the six recommendations set out below, and elaborated on
throughout this submission, HTI urges the Australian Government to adopt a
strong strategic framework for how it will regulate in respect of the development
and use of AI. This framework should underpin a series of positive reforms that
HTI has outlined below.
List of recommendations in this submission
Recommendation 1
The Australian Government should develop a regulatory strategy for AI
(Australia’s AI Regulatory Strategy). It should:
• be practical and effective – this will involve a combination of hard and
soft law, and both self- and co-regulatory measures
• pursue a clear aim – namely, to encourage innovation for public benefit,
while upholding human rights and other community protections
• promote national coherence and efficiency – this means better
coordination across Australian Government departments and agencies,
2 Australian Human Rights Commission, Human Rights and Technology (Report, March 2021) 27, 87.
and a harmonised regulatory approach in the federal, state, and territory
jurisdictions
• generally adopt a technology-neutral approach – except where this
approach would be inadequate to harness an opportunity or address a
risk of harm
• adopt a risk-based approach – which clearly articulates legal and
broader responsibility across the AI life cycle of design, development,
and deployment
• establish consultative mechanisms to support ongoing engagement with
stakeholders including civil society, industry and technical experts.
Recommendation 2
The Australian Government should do a stocktake of reform recommendations
arising from recent landmark reports relating to AI, conducted by bodies
including the Australian Competition and Consumer Commission, the Australian
Human Rights Commission, and the Attorney-General’s Department. The
Australian Government should prioritise reform proposed in those reports.
Recommendation 3
The Australian Government should undertake a legal gap analysis, focused on
areas where AI presents an especially significant risk of harm. The Australian
Government should prioritise reform that addresses those risks.
Recommendation 4
The Australian Government should establish an ‘AI Commissioner’ to provide
independent expert advice to government and regulators, and to provide
guidance on law and ethics for industry, civil society and academia.
Recommendation 5
The Australian Government should work with independent experts to develop
an AI assurance framework that would apply to the private sector in Australia
(an Industry AI Assurance Framework).
Recommendation 6
Australia should adopt framework legislation for AI (an Australian AI Act). The
proposed Australian AI Act should advance Recommendations 2-5 above. It
should also support the Australian Government in ensuring parity of legal
protections for Australians, as compared with citizens of the European Union
and other leading jurisdictions. However, the Australian Government should not
seek to replicate the text and structure of the EU’s draft AI Act in Australian law.
Background
HTI welcomes the opportunity to comment on the Discussion Paper. The
increasing uptake of AI by Australian businesses and government presents
enormous opportunities for Australian society. From forecasts of significant
economic benefit,3 to identifying solutions to society-wide problems such as
climate change,4 there is enormous potential for AI to meet some of the most
challenging and complex issues of our time.
In this submission, HTI draws on its expertise in AI governance and regulation.
It makes recommendations to support the safe and responsible use of AI in the
private sector, especially through reform to regulation and governance.
Given that HTI is currently undertaking a major project on AI regulation, there are
some questions and issues raised in the Discussion Paper about which HTI
does not yet have a settled view. HTI would welcome the opportunity to update
DISR, and other parts of the Australian Government, as it progresses this work
and develops further recommendations in this area.
The need for regulatory reform
Australia does not have an effective, coherent regulatory framework that
provides appropriate safeguards to ensure the safe and responsible use of AI.
Nor has there been, to date, a concerted effort to align Australian law with the
various strategic goals Australia has set regarding AI. There is now the
opportunity for the Australian Government to achieve both, through the creation
of an AI regulatory strategy.
AI is rapidly becoming essential to how Australian businesses create value. HTI
research indicates that a large number of Australian organisations rely on AI-
driven systems.5 AI systems are penetrating to the core of business models,
promising significant gains in both efficiency and productivity. HTI research
further indicates that few senior executives are fully aware of the extent of this
reliance. Many AI services are embedded in third-party software systems,
deployed by suppliers further up the supply chain, or used by employees
without management knowledge or oversight.6
As the development of AI accelerates, Australian businesses are increasingly
exposed to a range of new and exacerbated commercial, regulatory, and
reputational risks. Individuals and communities can and do suffer irreversible
harm when AI systems fail, are misused, or deployed in inappropriate contexts.
At a societal level, AI can be used in ways that increase inequality, undermine
3 See, for example, Microsoft and Tech Council of Australia, Australia’s Generative AI opportunity (Report, July 2023).
4 See, for example, Hamid Maher et al, ‘AI is essential for solving the climate crisis’, Boston Consulting Group (Slideshow, 7 July
2022).
necessarily used as audit tools, assurance frameworks can also perform a
similar function.
Australian public sector AI assurance frameworks
At the June 2023 Data and Digital Ministers Meeting, federal, state, and territory
Ministers agreed to work towards a nationally consistent approach to the safe
and ethical use of AI by government. A number of Ministers have since pointed
to the prospect of developing an AI assurance framework that would apply to
Australian Government agencies – modelled on the NSW AI Assurance
Framework.
The NSW AI Assurance Framework (see Box 3 below) has become a national
and international reference point for public and private sector bodies seeking to
assess their use of AI systems. Several NSW Government agencies have used
it to assess their proposed and active AI projects. It has also helped raise
awareness among NSW Government agencies about taking a risk-responsive
approach to developing and using AI.
HTI endorses the Australian Government’s consideration of a federal AI
assurance framework for government agencies. HTI understands that, following
the June 2023 Data and Digital Ministers Meeting, work towards an AI
assurance framework related to government development and use of AI is being
led within the Australian Government by the Minister for Finance.
Most relevant for DISR, therefore, this submission focuses on HTI’s
recommendation to develop an AI assurance framework directed to the private
sector. In other words, HTI urges the Australian Government to use the NSW AI
Assurance Framework as a model for a separate but related reform that would
apply this approach to industry.
Box 3: NSW’s AI Assurance Framework
In Australia, only NSW has implemented an AI assurance framework. NSW’s AI
Assurance Framework is the first mandatory formal government policy in
Australia to promote the responsible and ethical development and use of AI
systems by government. It contains obligations and considerations that apply to
all NSW Government agencies in their development and use of AI.
As summarised in Figure 1 below, the NSW AI Assurance Framework operates
as follows:
• subject to some exemptions, all NSW Government agencies must consider
the NSW AI Assurance Framework for any project that relies significantly on
AI. For smaller projects, the agency is required simply to undertake a self-
assessment process by reference to the NSW AI Assurance Framework.
• for AI projects valued over $5 million, or funded from the Digital Restart
Fund, there is an external scrutiny process involving the NSW Government
AI Review Committee.
Figure 1: Operation of NSW AI Assurance Framework
An Industry AI Assurance Framework
The same reasons that make AI assurance frameworks attractive as
compliance tools within the public sector also apply to the private sector. An
Industry AI Assurance Framework would provide a convenient way of combining
the key legal requirements with good-practice considerations relevant in the
development and use of AI by industry, especially where systems could
unlawfully limit the rights of Australians or otherwise cause significant harm.
In addition to drawing attention to obligations arising from primary legislation, an
Industry AI Assurance Framework could also give additional impetus to leading
soft law measures, such as international technical standards. Moreover, if the
Industry AI Assurance Framework itself were a form of subordinate legislation, it
could be updated more expeditiously than primary legislation – something that
is especially important in the context of a rapidly-evolving field such as AI.
There is currently no mandatory AI assurance framework that applies to the
private sector in Australia. For three reasons, however, an Industry AI
Assurance Framework would likely be familiar – at least in many key respects –
to the private sector in Australia.
First, the concept of an AI assurance framework draws heavily on a significant
body of research related to AI or algorithmic impact assessments. Such
measures were strongly supported by stakeholders in industry and community
consultation conducted by the AHRC, and ultimately by the AHRC itself.39 They
have also been supported in leading research,40 and by international bodies.41
39 Australian Human Rights Commission, Human Rights and Technology (Report, March 2021), Chapters 5 and 7.
40 See, eg, AI Now Institute, Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability (Report,
April 2018).
41 See, eg, Council of Europe, Unboxing Artificial Intelligence: 10 Steps to protect Human Rights (Recommendation, May 2019)
7.
Secondly, an Industry AI Assurance Framework could incorporate or emulate
co-regulatory measures that have had proven success, including in the area of
audit and risk.
Thirdly, while the current NSW AI Assurance Framework – and others being
considered by Australia’s Data and Digital Ministers – apply directly to the public
sector, such frameworks necessarily have a ‘horizontal effect’ that applies to the
private sector. Government agencies rarely, if ever, develop AI tools entirely in
house. Almost always, they do so through partnerships or outsourcing
arrangements involving private sector companies. This means that the
requirements in, say, the NSW AI Assurance Framework would be familiar to
any company engaging in such government work. In this way, an Industry AI
Assurance Framework would simply be an incremental extension of an already-
familiar set of obligations and considerations.
HTI has extensive experience and expertise developing AI assurance
frameworks for the public and private sector. HTI is currently working on a
review of the NSW AI Assurance Framework, and is working with several
Australian businesses to develop commercial assurance processes.
HTI would be pleased to provide further information to DISR on this work.
Recommendation 6: an Australian AI Act
Finally, to draw these strands together, Australia should adopt framework
legislation for AI. The proposed Australian AI Act would present an opportunity
to advance the four reform processes outlined above.
HTI does not recommend that Australian Parliament adopt a single statute that
attempts to be a comprehensive source of all legal obligations applicable to the
development and use of AI. While this regulatory approach may be more
appealing in some other jurisdictions, it would not be appropriate in the
Australian context. Hence, HTI does not recommend that the Australian
Government seek to replicate the structure and text (ie, the specific wording of
the provisions) of the EU’s draft AI Act in Australian law.
As previously noted, HTI also urges the Australian Government to develop and
adopt technology-neutral legislation wherever possible, reserving technology-
specific rules for AI systems only when broad-based instruments cannot
effectively achieve the relevant regulatory objectives for a particular technology.
However, this submission emphasises the importance of ensuring that, in
adopting its own regulatory approach, the Australian Government nevertheless
ensures that Australians receive an equivalent level of protection when
compared with citizens from the EU in respect of threats of harm associated
with the development and use of AI.
Without in any way derogating from this recommendation, HTI considers there
would be utility in adopting an Australian AI Act. This reform would be focused
on the following:
• it would provide a mechanism by which to introduce a range of legislative
reforms, including to other legislation, arising from the leading official
reviews referred to in Recommendation 2 above.
• as framework legislation, the proposed Australian AI Act could be a
central source of legislative authority on a range of legal issues
associated with AI – especially in defining key terms. For example, this
Act could clarify the definition of ‘reasons’ as it applies in the context of
AI-informed decision making.
• the proposed Australian AI Act could provide the legislative basis for
establishing the AI Commissioner referred to in Recommendation 4
above, as well as this body’s functions and statutory independence.
• the proposed Australian AI Act could vest a rule-making power in the
Minister for Industry and Science. This would enable the Minister to use
subordinate legislation for co-regulatory initiatives such as the creation
and updating of an Industry AI Assurance Framework, as outlined in
Recommendation 5 above.