
Published response #409
The University of Sydney, academics from the Media & Communications and Government & International Relations disciplines, Faculty of Arts and Social Sciences
4 Aug 2023

Upload 1


FACULTY OF ARTS AND SOCIAL SCIENCES

4 August 2023

We thank the Department of Industry, Science and Resources for the opportunity to respond to the Safe and Responsible AI in Australia Discussion Paper.

In light of the enormous social, economic, political, cultural and ethical challenges presented by rapid developments in artificial intelligence (AI), and particularly generative artificial intelligence, the opportunity to participate in a policy deliberation process that aims to address questions of the social good at an early stage, and to design suitable regulations to meet such challenges, is very much welcomed.

Our submission is a collaborative enterprise between academic researchers in the Disciplines of Media and Communications and Government and International Relations in the Faculty of Arts and Social Sciences at The University of Sydney. It is a collectively authored document that has arisen out of collaborative discussions among a diverse group of researchers with a shared interest in the digital, and a shared focus upon the common good.

In preparing this submission we also recognise and pay respect to the Elders and communities – past, present, and emerging – of the lands that The University of Sydney’s campuses stand on. For thousands of years they have shared and exchanged knowledges across innumerable generations for the benefit of all.

Terry Flew

How to cite this document:

Flew, T., Chesher, C., Hutchinson, J., Stilinovic, M., Bailo, F., Gray, J., Lumby, C., Stepnik, A., Goggin, G. and Humphry, J. (2023), Safe and Responsible AI in Australia: Submission Paper, prepared 4 August. https://ses.library.usyd.edu.au/handle/2123/31527


Upload 2


Safe and Responsible AI in Australia
Submission Paper

The University of Sydney, academics from the Media & Communications and Government & International Relations disciplines, Faculty of Arts and Social Sciences

Response to the Australian Government Department of Industry, Science and Resources, Safe and Responsible AI in Australia Discussion Paper
https://consult.industry.gov.au/supporting-responsible-ai

Paper prepared 4 August 2023
FACULTY OF ARTS AND SOCIAL SCIENCES

Safe and Responsible AI in Australia
Defining Artificial ‘Intelligence’
Articulating Risks
Tackling New Approaches
Risk-Based Approaches
References Cited
Authors
Contact


Defining Artificial ‘Intelligence’
1. Do you agree with the definitions in this discussion paper? If not, what definitions do you prefer and why?

While there are many variations, there is currently no comprehensive definition of artificial intelligence (AI). Bodies such as the Australian Human Rights Commission (AHRC) have drawn attention to the problems that may arise from poorly framed definitions that fail to move beyond describing a constellation or cluster of technologies. According to the AHRC’s 2021 Human Rights and Technology Report, employing AI as a universally accepted term is both ambiguous and misleading. The report further notes that AI, as currently defined, does not consider the implications of new forms of AI arising in the future. This lack of consensus could have implications for how policies pertaining to the regulation of AI are shaped, along with their outcomes now and in the future.

Defining AI has not only drawn the attention of industry but has also piqued the interest of academics and news media. For example, TechCrunch (Coldewey, 2023) pointed to the quagmire of pairing the terms ‘artificial’ and ‘intelligence’, claiming that “[there is] no one definition of intelligence out there, but what these systems do is definitely closer to calculators than brains”.

Similar debates can be found in scholarly work. Computational science pioneer David B. Fogel (2022: 115) argues that “it is not enough to ‘fake’ the intelligence or mimic its overt consequences.” Others have aimed to define AI through categorisation. For example, Friedrich and others (2022) point to previous attempts to categorise artificial intelligence as either ‘weak’ or ‘strong’. More specifically, “strong AI essentially describes a form of machine intelligence that is equal to human intelligence or even improves upon it, while weak AI (sometimes also referred to as narrow AI) is limited to tractable applications in specific domains” (p. 824).

However, others argue that any ‘intelligence’ a human-made system presents should be distinguished from actual human intelligence (see Wang, 2019). Hence the problematic nature of applying the term ‘artificial intelligence’: it ascribes a form of agency to machines that essentially mimic human behaviour. As Mitchell (2023) writes, machines are not ‘there yet’, even assuming a narrow definition of intelligence that includes learning, generalising/abstracting, and applying knowledge to new situations: "Taken together, these problems make it hard to conclude—from the evidence given—that AI systems are now or soon will match or exceed human intelligence. The assumptions that we make for humans—that they cannot memorize vast collections of text related to test questions, and when they answer questions correctly, they will be able to generalize that understanding to new situations—are not yet appropriate for AI systems."

The term ‘artificial intelligence’ is nonetheless widely used in policymaking. For example, the United Kingdom’s Information Commissioner’s Office (UK ICO) uses the umbrella term AI because it ‘has become a mainstream way for organisations to refer to a range of technologies that mimic human thought.’ The proposed European Union Artificial Intelligence Act likewise employs the term ‘Artificial Intelligence’ to describe a “fast evolving family of technologies that can bring a wide array of economic and societal benefits across the entire spectrum of industries and social activities”. The second edition of Singapore’s Model Artificial Intelligence Governance Framework defines AI as “a set of technologies that seek to simulate human traits such as knowledge, reasoning, problem solving, perception, learning and planning, and, depending on the AI model, produce an output or decision (such as a prediction, recommendation, and/or classification)”.

However, using the term "intelligence" to refer to predictive statistical models misleads about the nature of these products and, more troublingly, can offer a blanket justification for their outcomes (intelligent decisions rather than statistical estimates). It can also mislead about the computational processes leading to these outcomes, implying that some form of moral actor is responsible for decisions that are instead probabilistic and made without any human intervention.

In contrast to these umbrella usages, the OECD, in its AI Principles Overview, adds the term ‘system’ to AI, defining an AI system as a “machine-based system that is capable of influencing the environment by producing an output (predictions, recommendations or decisions) for a given set of objectives”. Complementing this, the IEEE Position Statement also employs the word 'system' to differentiate between human and artificial intelligence: “artificial intelligence involves computational technologies that are inspired by—but typically operate differently from—the way people and other biological organisms sense, learn, reason, and take action”.

As such, this submission recommends:
• Employing the term AI ‘system’ to distinguish human-made systems from actual human intelligence, and to avoid ascribing agency to systems that mimic human behaviour.
• Moving beyond the scope of ‘weak’ and ‘strong’ definitions, as ‘weak’ AI may offer a broader range of applications which, in turn, can result in a broader range of risks, both long and short term.
• Developing a set of definitions, categorised by models, as opposed to a single ‘umbrella’ definition, which may be ambiguous, misleading, and disregard future applications.

Articulating Risks
2. What potential risks from AI are not covered by Australia’s existing regulatory approaches? Do you have suggestions for possible regulatory action to mitigate these risks?

The rapid development of generative AI models has led to significant advancements in the field of text-to-image AI. These models have the potential to revolutionise creative industries by automating the generation of original artistic works based on textual prompts. However, the use of artists' names in prompts raises concerns regarding moral rights, including the right to be recognised as the creator of a work and the right to protect the integrity of that work.

This submission aims to address these concerns by outlining the potential impact of generative AI models on moral rights and proposing guidelines to ensure that artists' rights are respected and protected.

Generative AI models, such as Midjourney and DALL-E, have demonstrated the ability to generate high-quality images based on textual prompts. These models can interpret the prompts and produce original artwork that may closely resemble the style of well-known artists. However, the use of artists' names in prompts may lead to the infringement of moral rights, as the generated images could be falsely attributed to the artists or may not accurately represent their artistic vision.

Moral Rights Concerns
The use of generative AI models in the context of text-to-image AI raises two primary concerns related to moral rights:

Right of Attribution
When an artist's name is used in a prompt, there is the potential for the generated image to be falsely attributed to the artist. This misattribution not only infringes upon the artist's right to be recognised as the creator of their work but may also damage their reputation if the generated image is of poor quality or inconsistent with their artistic style.

Right of Integrity
The use of an artist's name in a prompt may also result in the generation of images that distort or modify the artist's original work, violating their right to protect the integrity of their creations. This may occur if the AI model generates an image that combines elements from multiple works or creates a derivative work that does not accurately represent the artist's vision.

Proposed Guidelines
To address these concerns, the following guidelines are proposed for the use of generative AI models in text-to-image AI applications:

Explicit Consent
AI developers must obtain explicit consent from artists before allowing their names to be used in prompts. This ensures that artists are aware of the potential use of their names and have the opportunity to approve or reject their association with the generated images.

Clear Attribution
Ensure that generated images are clearly attributed to the AI model, not the artist, to avoid confusion and potential misattribution. This can be achieved through the use of watermarks, metadata, or other forms of labelling that clearly indicate the source of the generated image.
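To illustrate how such labelling could work in practice, the following minimal Python sketch (ours, not part of the guideline itself) embeds provenance information in a generated PNG's metadata using the Pillow library; the field names ("generated-by-ai", "generator", "prompt") are hypothetical rather than an established standard.

```python
# Minimal sketch of provenance labelling for an AI-generated image.
# Assumes the Pillow library; the metadata field names used here are
# illustrative placeholders, not an industry standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_generated_image(in_path: str, out_path: str, model_name: str, prompt: str) -> None:
    """Attach provenance metadata attributing the image to the AI model, not an artist."""
    image = Image.open(in_path)
    metadata = PngInfo()
    metadata.add_text("generated-by-ai", "true")
    metadata.add_text("generator", model_name)   # the model/version that produced the image
    metadata.add_text("prompt", prompt)          # the textual prompt used
    metadata.add_text("attribution",
                      f"Image generated by {model_name}; not the work of any named artist.")
    image.save(out_path, pnginfo=metadata)

if __name__ == "__main__":
    # Hypothetical file and model names, for illustration only.
    label_generated_image("output.png", "output_labelled.png",
                          model_name="example-image-model-v1",
                          prompt="a coastal landscape at dusk")
```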


Respect for Artistic Integrity
Develop and implement guidelines and best practices for AI-generated images that respect the artistic integrity of the artists whose names are used in prompts. This may include avoiding the use of specific elements from artists' works, ensuring that generated images do not closely resemble existing works, and providing artists with the ability to review and approve or reject generated images before they are made public.
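As a further illustration (again ours, and purely a sketch), an automated check for close resemblance to existing works could use perceptual hashing to route borderline images to the artist for review; the example below assumes the Pillow and imagehash libraries, a hypothetical folder of reference works, and an arbitrary similarity threshold.

```python
# Illustrative sketch: flag generated images that closely resemble known works,
# so they can be held for artist review before publication.
# Assumes the Pillow and imagehash libraries; the threshold below is arbitrary.
from pathlib import Path

import imagehash
from PIL import Image

SIMILARITY_THRESHOLD = 8  # maximum Hamming distance treated as "too similar" (illustrative)

def needs_artist_review(generated_path: str, reference_dir: str) -> list[str]:
    """Return the reference works whose perceptual hash is close to the generated image."""
    generated_hash = imagehash.phash(Image.open(generated_path))
    flagged = []
    for reference in Path(reference_dir).glob("*.png"):
        distance = generated_hash - imagehash.phash(Image.open(reference))
        if distance <= SIMILARITY_THRESHOLD:
            flagged.append(reference.name)
    return flagged

if __name__ == "__main__":
    # Hypothetical paths, for illustration only.
    matches = needs_artist_review("generated.png", "reference_works/")
    if matches:
        print("Hold for artist review; similar to:", matches)
    else:
        print("No close matches found.")
```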

Conclusion
The use of generative AI models in text-to-image AI applications has the potential to transform creative industries. However, it is essential to balance the benefits of this technology with the need to protect and respect artists' moral rights. By implementing the proposed guidelines, it is possible to ensure that artists' rights are upheld while fostering innovation and creativity in the field of AI-generated art.

Tackling New Approaches
6. Should different approaches apply to public and private sector use of AI technologies? If so, how should the approaches differ?

Introduction
Corporations are often the sole repositories of the technical expertise required to fully understand (and assess) the models they deploy. With the pool of experts being relatively small, and with expertise depending on expensive computational facilities, there is a serious risk of a lack of independence among those with sufficient expertise to control, assess and regulate AI systems. Creating public, independently funded agencies with access to both computational and data resources would allow the creation of independent institutions that could check claims made by corporate actors about their products and competently investigate a wider range of potential negative outcomes that corporations are unwilling to investigate.

Case study – Newsbots and generative AI as editorial process
While integrating generative AI into the news cycle presents advantages, there remain several areas of concern. Its use within the news cycle is especially helpful for summarising the key elements of news articles (see the Artifact app in particular), writing suitable headlines, or even translating the news for niche and new audiences. From the perspective of the Australian Broadcasting Corporation, this aligns with its remit to use emerging technologies to keep marginalised groups informed and educated, while removing laborious tasks from its journalists. However, relying on AI to write and curate content in the same way an editor does poses significant issues: it can generate incorrect or biased information, it omits the option of a ‘right of reply’ for the subject of any given article or investigative journalism piece, and in many instances it does not comply with the legal requirements of the institutions engaging in this activity.

Further to these complications for news and media organisations, the integration of Newsbots into this process adds a further layer of complexity to the use of AI in the news cycle. Building on the moral and ethical issues outlined in the generative AI section of this submission, Newsbots integrate the conversational aspect of AI, which research has shown does not automate well. In many cases, the reliance on human intervention outweighs the benefit of these seemingly ‘automated’ practices, which at this stage undercuts any potential benefit of speakers and chatbots operating as effective Newsbots built on AI. This aspect of editorial oversight will need strict guidelines in the coming years, both as the models demonstrate increasing capacity to undertake more complex activities through conversational journalism and to ensure that the sources the bots reference are credible and reliable journalistic spaces.

9. Given the importance of transparency across the AI lifecycle, please share your thoughts on:
1. where and when transparency will be most critical and valuable to mitigate potential AI risks and to improve public trust and confidence in AI?
2. mandating transparency requirements across the private and public sectors, including how these requirements could be implemented.

There is an inherent tension between safety/privacy and transparency. Industry claims that safety can only be guaranteed if models (especially generative models) are not open-sourced and if access to models by third parties is strictly regulated by the companies themselves. This replicates what happens in the analysis of content distribution on social media, which is possible only for platforms' own researchers or is limited to the analysis of data selectively released by those same platforms. For this reason, the corporations developing, owning and marketing predictive statistical models can have a vested interest in framing the AI debate as a debate about "safety".

The academic literature on trust indicates that trust closely correlates with trustworthiness (Hardin, 2002, 2006; Uslaner, 2002). In other words, public trust and confidence in AI will be strengthened if there is a perception that those developing and using it are trustworthy, and will decline if AI is seen to be abused. This indicates that binding AI Ethical Codes at an industry level will be a necessary but not sufficient condition for engendering public trust and confidence in AI.

Transparency requirements therefore need to be accompanied by meaningful accountability provisions, and effective negative sanctions for the misuse of AI. This means that industry self-regulation and other forms of “soft law” will be insufficient. Government regulators will need not only rules concerning the use of AI, but also meaningful penalties that they are prepared to apply to large organisations, in both the public and private sectors, for actions that abuse public trust in AI.

This will require independent government agencies that can take action against other arms of government. The model of anti-corruption commissions, and the recent Royal Commission into the Robodebt Scheme, provide examples of how this can work in practice.

11. What initiatives or government action can increase public trust in AI deployment to encourage more people to use AI?


Greater public use of AI is not an end in itself. Insofar as there is an “AI dividend” to society, as distinct from companies and agencies that can deploy it to generate new efficiencies in production or new products and services, it does not directly derive from more people making use of AI. It therefore differs from the “digital dividend” that was associated with adoption of the Internet.

Greater public trust in AI deployment will be the result of evidence of transparency, accountability and appropriate governance frameworks around its development and use. This must include a role for government regulatory agencies, and the application of legally binding sanctions for breaches of laws, codes and standards associated with AI deployment and use.

Risk-Based Approaches
14. Do you support a risk-based approach for addressing potential AI risks? If not, is there a better approach?

As so-called AI systems are integrated into social, economic and political systems to support private and public functions (from targeting audiences to assessing welfare benefits), their potential long-term effects, and the reversibility of those effects, cannot be accurately estimated. The widespread application of sophisticated, data-intensive, but often black-box statistical models will likely create a new class of complex socio-technical systems characterised by emergent behaviours and low predictability. In this sense, we suggest differentiating between the application of AI models within well-defined and generally well-understood technological domains (e.g. self-driving cars, medical surgery) and their application to socio-technical domains (e.g. dating apps, search engines, recruitment).

Applying AI systems to socio-technical domains is inherently more hazardous because long-term effects depend not only on the technology but also on how people and organisations react and adapt to its implementation. AI systems, in combination with social systems, constitute a complex risk, in which feedback loops generated by their interactions lead to complex, unpredictable risks.

Impact assessment exercises (Attachment C) conducted before deploying AI products are important but necessarily limited in their capacity to foresee who will eventually be negatively affected, and how. Continuing impact assessment exercises, funded by the AI industry but run independently of it and in the public interest, should also be established.

As such, this submission recommends:
• Continuous scenario assessment. Independent authorities should, on a continuous basis, interact with the system, simulating different realistic scenarios, with particular attention to scenarios involving vulnerable populations (an illustrative sketch of such an assessment loop follows this list).
• Decision systems vs supporting systems. The continuous assessment of AI systems responsible for final decisions (e.g. credit scoring, policing, visa processing) should be prioritised, whether they are considered low risk or high risk.
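To make the first recommendation concrete, the sketch below is our own illustration of what a continuous scenario assessment loop might look like: synthetic applicant profiles, including an oversampled vulnerable group, are run through a stand-in decision function and per-group outcomes are logged for longitudinal review. The function and field names (score_applicant, income, group) and the approval threshold are hypothetical placeholders, not a reference to any actual system.

```python
# Illustrative sketch of continuous scenario assessment of a decision system.
# `score_applicant` stands in for the audited AI decision function;
# the profile fields and thresholds are hypothetical placeholders.
import csv
import random
from datetime import datetime, timezone

def score_applicant(profile: dict) -> float:
    """Placeholder for the audited decision system (e.g. a credit-scoring model)."""
    base = 0.5 + 0.3 * random.random()
    return min(base + 0.1 * (profile["income"] / 100_000), 1.0)

def generate_scenarios(n: int) -> list[dict]:
    """Simulate realistic applicant profiles, oversampling a vulnerable group for scrutiny."""
    groups = ["general"] * 7 + ["vulnerable"] * 3   # deliberate oversampling
    return [{"group": random.choice(groups),
             "income": random.randint(20_000, 120_000)} for _ in range(n)]

def run_audit(n: int = 1000, out_file: str = "audit_log.csv") -> None:
    """Run one audit round and append per-group approval rates to a log for longitudinal review."""
    results = {"general": [], "vulnerable": []}
    for profile in generate_scenarios(n):
        approved = score_applicant(profile) >= 0.7  # hypothetical approval threshold
        results[profile["group"]].append(approved)
    with open(out_file, "a", newline="") as f:
        writer = csv.writer(f)
        for group, outcomes in results.items():
            rate = sum(outcomes) / len(outcomes) if outcomes else 0.0
            writer.writerow([datetime.now(timezone.utc).isoformat(), group,
                             len(outcomes), round(rate, 3)])

if __name__ == "__main__":
    run_audit()
```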

17. What elements should be in a risk-based approach for addressing potential AI risks? Do you support the elements presented in Attachment C?

Risk-based approaches to AI can play a valuable role in preventing harm to users and society, particularly if employed to evaluate whether AI will provide a useful solution prior to investment in and deployment of these systems. For the use of AI systems in socio-technical domains, including cases where risks may evolve over time or where there is a risk of bias, discrimination or misuse, users of AI systems should also be subject to an ongoing duty of care, providing a baseline level of protection from harm that is legislated and enforceable. The UK’s Online Safety Bill provides an illustrative model for this type of duty of care.

20. Should a risk-based approach for responsible AI be a voluntary or self-regulation tool or be mandated through regulation? And should it apply to:
1. public or private organisations or both?
2. developers or deployers or both?

It is important that standards for minimising risk and harm are enforceable under the law: there must be real and substantial consequences and accountability for actors who do not properly assess and mitigate risks. As two decades of social media have shown, relying on industry self-regulation through voluntary codes of conduct and other soft law arrangements is insufficient for ensuring that digital technologies operate and are governed in a manner that protects the public interest. For this reason, if a risk-based approach is employed for regulating AI in Australia, it should be mandated through legislation. It should also apply to both public and private organisations, because both have significant potential to cause harm through the deployment of AI systems.


References Cited

Coldewey, D. (2023, June 28). Age of AI: Everything you need to know about artificial intelligence. TechCrunch. https://techcrunch.com/2023/06/28/age-of-ai-everything-you-need-to-know-about-artificial-intelligence/. Retrieved 20/09/2023.

Fogel, D. B. (2022). Defining artificial intelligence. Machine Learning and the City: Applications in Architecture and Urban Design, 91-120.

Friedrich, S., Antes, G., Behr, S., Binder, H., Brannath, W., Dumpert, F., ... & Friede, T. (2022). Is there a role for statistics in artificial intelligence? Advances in Data Analysis and Classification, 16(4), 823-846.

Hardin, R. (2002). Trust and Trustworthiness. New York: Russell Sage Foundation.

Hardin, R. (2006). Trust. Cambridge: Polity Press.

Mitchell, M. (2023). How do we know how smart AI systems are? Science, 381(6654), adj5957.

Uslaner, E. M. (2002). The Moral Foundations of Trust. Cambridge: Cambridge University Press.

Wang, P. (2019). On defining artificial intelligence. Journal of Artificial General Intelligence, 10(2), 1-37.


Authors

Professor Terry Flew
Terry Flew is Professor of Digital Communication & Culture, Faculty of Arts & Social Sciences at The University of Sydney. He is the author of 16 books (seven edited), including Regulating Platforms, SAGE Handbook of the Digital Media Economy, Understanding Global Media, Politics, Media and Democracy in Australia, Media Economics and Global Creative Industries. He has authored 71 book chapters, 114 refereed journal articles, nine research monographs and nine commissioned reports.

Dr Chris Chesher
Dr Chris Chesher is Senior Lecturer in Digital Cultures in the Discipline of Media and Communications. His transdisciplinary approach to digital cultures, media studies and cultural studies connects with philosophy of technology, science and technology studies, games studies, internet studies, sociology of technology, human-computer interaction, social robotics, cultural robotics and digital humanities.

Dr Jonathon Hutchinson
Jonathon Hutchinson is the Chair of Discipline of Media and Communications at the University of Sydney. He is a Chief Investigator on the Australian Research Council LIEF project, The International Digital Policy Observatory, and is also a Chief Investigator on the eSafety Commission research project, Emerging online safety issues: co-creating social media education with young people. He is currently Editor in Chief of the Policy & Internet journal.


Mrs Milica Stilinovic
Milica Stilinovic is a PhD candidate at the University of Sydney, Australia, where she has lectured on media politics and political communication. Her current research focuses on the digitised communication of violence and the language of violent acts through the lens of far-right, white nationalist and white supremacist groups. She is the Managing Editor of Policy and Internet and an author and journalist whose work has appeared in Forbes, TIME, BBC, and other local and international titles.

Dr Francesco Bailo
Francesco Bailo is a Lecturer in the School of Social and Political Sciences, University of Sydney. He is interested in researching forms of political engagement and political talk on social media. He has researched the emergence and dynamics of online communities, the relationship between news organisations and social media, and the interdependence between social media activists and news organisations. He has applied quantitative research methods, developing expertise in quantitative text analysis (NLP) and network analysis.

Dr Joanne Gray
Dr Joanne Gray is a Lecturer in Digital Cultures in the Discipline of Media and Communications, Faculty of Arts and Social Sciences. She is an interdisciplinary academic with expertise in digital platform policy and governance. Her research seeks to understand how digital platforms, such as Google/Alphabet and Facebook/Meta, exercise private power, and to explore relevant policy options.


Professor Catharine Lumby
Catharine Lumby is a Professor of Media at the University of Sydney, where she was founding Chair of the Media and Communications Department. Prior to entering academia, she worked for two decades as a print and TV journalist for the Sydney Morning Herald, the ABC and The Bulletin magazine. She has written and co-authored ten books and numerous book chapters and journal articles, and recently completed a biography of Frank Moorhouse.

Ms Agata Stepnik
Agata Stepnik is a research officer in the Discipline of Media and Communications at the University of Sydney. Her research interests include user agency in recommender systems, and news consumption practices on social media platforms.

Professor Gerard Goggin
Gerard Goggin is the inaugural Professor of Media and Communications at the University of Sydney, a position he has held since 2011. Previous appointments include Professor of Digital Communications at the University of New South Wales (2007-2010), the University of Queensland, Southern Cross University, and, as visiting professor, the University of Barcelona.


Dr Justine Humphry
Justine Humphry is a Senior Lecturer in Digital Cultures in the Discipline of Media and Communications at the University of Sydney. Her previous appointments include Lecturer in Cultural and Social Analysis at Western Sydney University and Research Fellow in Digital Media at the University of Sydney. Justine researches the cultures and politics of digital media and emerging technologies with a focus on the social consequences of mobile, smart and data-driven technologies.


Contact
Professor Terry Flew
Faculty of Arts and Social Sciences, School of Art, Communication and English

Ph: +61 2 9351 7517, +61 405 070 980
Address: Room S2.08, A20 John Woolley Building, The University of Sydney, NSW 2006
Email: terry.flew@sydney.edu.au

CRICOS 00026A

