
Centre for Culture and Technology (CCAT)
Curtin University

CCAT submission in response to the Safe and
Responsible AI in Australia Discussion Paper
July 2023

Acknowledgements: this submission has been prepared by Kai-Ti Kao, Katie Ellis, Eleanor Sandry, Mike Kent, Tama Leaver and Stuart Bender from the Centre for Culture and Technology, Curtin University.

1. Who we are
The Centre for Culture and Technology (CCAT) is a research centre in the Faculty of Humanities at Curtin University. We partner with community, government, and industry to co-design socially just digital futures. Our research focuses on how cultural practices are changing in relation to digital technologies and platforms in areas such as accessibility, health communications, intimacy, social life, popular culture, knowledge production, commerce, politics, and activism. At the heart of our work lies a commitment to investigating the opportunities and challenges that arise from the rapid integration of digital technologies and media platforms into everyday life and culture for specific social groups such as people with disability, Indigenous people, children and youth, and people with diverse sexualities and gender identities.

We are motivated by the proposition that the study of culture, with its emphasis on identity, meanings, relationships, power and values, needs to be better integrated with the study of media and digital technologies, including Artificial Intelligence (AI).

Within CCAT our research on AI recognises both opportunities and challenges related to the integration of AI with our existing communication strategies and social and cultural institutions. For example, we recognise that AI has great potential to foster the inclusion of people with disability in health, justice, education, and housing.1 It is also an important tool within education, but not without its challenges.2

Our research calls attention to how AI has been developed with particular conceptions of "human" in mind. The AI field, from the technology industry to scientific research, has been dominated by White men, and its outputs both reflect and privilege their experience of the world.3 The resulting ways that AI functions to exclude, marginalise, and discriminate against those who do not fit this mould have been well-documented.4

While AI is defined as "a collection of interrelated technologies used to solve problems autonomously and perform tasks to achieve defined objectives without explicit guidance from a human being",5 human interaction is essential to the current discussion and the way forward for AI governance.6 It is through this human interaction that we ensure "the design of AI, its inputs, outputs and regulatory framework do not preclude entire subsets of the population from experiencing its benefits".7 This is essential to establishing trust amongst the Australian population.

2. Summary
Our submission response to the Safe and Responsible AI in Australia Discussion Paper can be summarised across five key themes:

1. Literacy
• A focus on AI literacy and public education can help build greater trust in AI; currently there is confusion over what AI can do, and too many assumptions are made about its potential.
• A focus on digital literacy is therefore also important in helping to dispel much of the mystique that currently surrounds AI. Digital literacy can help us recognise that many of the concerns and issues raised in relation to public engagement with AI are not tied to specific technologies but are more deeply embedded in society and have not been adequately managed in the past.

2. Risk
• There is an assumption within this Discussion Paper that the risks related to AI have already been identified or are foreseeable. It is important that we recognise that it is not possible to predict all the future risks that are likely to emerge through the use of AI, and some degree of flexibility will need to be embedded within the proposed governance approach to account for these unforeseen risks.
• The proposed risk management approach is far too broad and, in line with other jurisdictions' approaches, we advise that at least four risk categories should be implemented (low, medium, high, and very high).

3. Interoperability
• Given the range and variety of technologies that fall under the umbrella term "AI", it is important to ensure that definitions used in policy are consistent across different government departments and organisations.
• Australia's overall approach to AI governance should be mindful of what other government departments, organisations, and bodies are doing, both within Australia and internationally.

4. Responsibility
• It is important that Australia's approach to governing AI protects and ensures culturally appropriate development, applications, and uses of AI.
• When considering the issue of responsibility, care must be taken to ensure that the wording of policies does not place responsibility inherently or inadvertently with the AI technologies. Instead, there must be clear recognition that responsibilities are held by the people involved in designing, deploying, applying, and managing AI, in particular where the AI is being made available to others.

5. Opportunity
• Australia's approach to governing AI should indeed be careful not to stifle the opportunities offered by AI. Attention should be focused on the opportunities AI technologies can offer for social welfare and the inclusion and betterment of marginalised groups, alongside how AI can contribute to industry and economic growth.

3. Responses to questions
Below is a more detailed response to specific questions posed in the Discussion Paper.

Q2 What potential risks from AI are not covered by Australia's existing regulatory approaches? Do you have any suggestions for possible regulatory action to mitigate these risks?

The broad risk management approach reflected in this Discussion Paper assumes that risks related to AI are identifiable and foreseeable. Given that AI is rapidly evolving, with new use cases and applications being discovered daily, allowance should be made for some degree of flexibility that adapts to unexpected uses and risks.

We also believe that the three risk levels proposed in the Discussion Paper are inadequate for capturing the range of current and potential risks posed by AI. We recommend that, in line with the approach taken by the EU and Canada, Australia employ a four-risk-level approach to managing AI (low, medium, high, and very high).

Q3 Are there any further non-regulatory initiatives the Australian Government could implement to support responsible AI practices in Australia? Please describe these and their benefits or impacts.

We recommend greater public education focusing on both AI and digital literacy. Digital literacy refers to "the ability to search and navigate, create, communicate and collaborate, think critically, analyse information, and address safety and wellbeing using a variety of digital technologies".8 As AI is a rapidly evolving technology, it is difficult to provide clear benchmarks for what can be considered "AI literate". However, we are in alignment with UNESCO in viewing AI as "the basic grammar of our century" and believe AI literacy will encompass the ability to understand what AI can and cannot do, when and how it can be useful, and when and how it should be questioned.9

• AI Literacy
Currently, there is a great deal of uncertainty and confusion circulating about what AI technologies are realistically capable of.10 The term "AI" has been used as an umbrella to encompass a complex range of technologies, from robotics to machine learning and natural language processing. The type of AI that the general public engages with on an everyday basis actually comprises what is known in AI circles as "Narrow AI" or "Weak AI". "Strong AI", by contrast, includes the categories of "Artificial General Intelligence" and "Artificial Super Intelligence" and is, as yet, not considered to exist outside of theory and fiction. Even more advanced versions of AI that are available to us, such as self-driving cars and generative AI tools like ChatGPT, do not constitute "Strong AI". However, media reports and public discussions that use the umbrella term "AI" do not differentiate between these different types of AI.

This is leading to confusion that fosters exaggerated appraisals of AI's positive and negative impacts on society. As a result, we are seeing the circulation of common AI imaginaries such as: the idea that AI will free us from the tedium of human labour, drastically improve healthcare and extend our lifespans, take our jobs and create a crisis of unemployment, or destroy us by becoming sentient and taking over the world. The circulation of such imaginaries detracts from critical conversations about AI's current and actual impact on society, including discussions about who benefits, who is harmed, and how our potential capabilities are being both afforded and stifled. Greater public education focusing on AI literacy can help clarify understandings of what AI is and how it is currently being used. This can not only help to build greater public trust and confidence in AI, but also help to mitigate any potential and as yet unforeseen risks.

It is also important that news media outlets undergo AI literacy training. UK research into media coverage of AI revealed that news media have a tendency to inflate the possibilities of these technologies, over-rely on industry figures as authoritative sources, and uncritically accept the promises offered by these industry figures and technologies.11 These tendencies are also being reflected in Australian news media and are impacting the public's perception, understanding, and trust in these technologies. Greater AI literacy in news media will therefore have two benefits: a) more accurate reporting of AI and its impacts on society; and b) further enhancing the AI literacy of the public via more accurate and realistic communication about AI.

• Digital Literacy
It is important to recognise that many of the risks related to AI use, particularly in relation to media and communication, are deeply embedded in society and not tied to specific technologies. Similar risks have emerged in the past in relation to social media and the internet (e.g. the "black box" of algorithms) but have not been adequately managed. By focusing on public education and digital literacy, we equip ourselves better to manage the currently known and foreseeable risks, and better insulate ourselves against the unforeseeable and unexpected.

Q4 Do you have suggestions on coordination of AI governance across government? Please outline the goals that any coordination mechanisms could achieve and how they could influence the development and uptake of AI in Australia.

In responding to this question, we note a number of current government reviews on this topic, as well as international interest in AI governance by organisations such as the Association of Internet Researchers (AoIR). It is important that there is a focus on interoperability of definitions and regulatory approaches. Any policies or governance approach developed should also be mindful of what is happening elsewhere within Australia and internationally. The current Multicultural Framework Review, as well as the inquiry into the use of generative artificial intelligence in the Australian education system, are both examples of ongoing policy initiatives that will impact Australia's broader governance and regulatory approach to AI.

Q9 Given the importance of transparency across the AI lifecycle, please share your thoughts on:
a. Where and when transparency will be most critical and valuable to mitigate potential AI risks and to improve public trust and confidence in AI?
b. Mandating transparency requirements across the private and public sectors, including how these requirements could be implemented.

Transparency is vital at all stages of the AI lifecycle, from design and development through to implementation and use. There is currently a lack of public trust in both governments and industry regarding AI development, use and governance.

Public trust in AI can be improved by communicating information in plain language and readily available formats that are accessible to all members of our population, including disability groups and non-English speaking communities. To this end, we recommend an inclusive design approach to developing appropriate strategies for communicating information about AI. We have elaborated on this further in our response to Q11.

The other element of transparency needed, particularly for generative AI, is clarity about the data sources from which the AI has been developed. This will be important not only for public trust and understanding, but also for business confidence in using different AI tools, given the risks associated with potential litigation from owners of the data on which the AI tools draw.

Q11 What initiatives or government action can increase public trust in AI deployment to encourage more people to use AI?

To build public trust in AI, it is important that the public clearly understands what AI is and how it is being used. Therefore, our recommendations for government initiatives and action are focused on developing clear communication with the public about AI. We have three key recommendations:

1. Communication about AI must be inclusive and accessible.
We recommend an inclusive design approach to communication strategies. Inclusive design focuses on edge users, or people overlooked in the design process (such as people with disability or non-English speaking communities), to improve value for all users. Our research on health communications during the COVID-19 pandemic, for example, found that when communication strategies were designed for people with disability, the entire population benefited and was better able to understand and trust important government messaging.12

2. Focus on building AI literacy and digital literacy.
As mentioned in our response to Q3, we recommend greater public education that focuses on AI literacy and digital literacy. This is in line with UNESCO's Recommendation on the Ethics of Artificial Intelligence, which views AI and digital literacy as important for ensuring effective public engagement and participation, and vital to helping the public avoid undue influence from the misuse of AI or AI-generated content, as well as make informed decisions about their own AI use.13 Such a public education initiative will help build public confidence and trust in AI use, better empower the public in their engagement with AI technologies, and foster greater resilience to as yet unforeseen risks posed by future AI developments.

As an example of why AI literacy is important, we refer to the recent instance of Australian doctors and hospitals using ChatGPT to write medical notes.14 We note that the Australian Medical Association (AMA) has called for stronger AI rules and healthcare-specific regulation in its submission response to this Discussion Paper. We would also add that greater AI literacy would have been a benefit in this instance, helping both medical staff and patients recognise the inherent privacy and security risks, as well as the possibility that the material generated by the chatbot could be incomplete, misleading, or wrong. In this instance, both medical staff and patients also needed to be aware that any data entered into ChatGPT will be used by its parent company to further develop and commercialise future AI products, thus posing a considerable ethical concern regarding the use of patients' information.

3. Use of AI must be transparent and clearly stated.
Public trust in AI is more likely to increase when the public is aware of how and when AI is being used. To this end, we recommend that future AI regulation mandate that uses of AI be clearly stated as such, especially when the end result, product, or service will impact the public.

We particularly advise such transparency of use in relation to AI-generated news media content. The use of AI to create and publish news articles, images, and video has been rising in recent years. Recent revelations that News Corp Australia has been publishing as many as 3,000 AI-generated articles a week have highlighted the extent of this practice,15 which is likely to increase as more news organisations explore AI-generated content. This has significant implications for public trust in news media, as AI not only circulates embedded biases but also potentially spreads misinformation.16 More careful consideration of the use of AI in news media is warranted; however, as a start, we recommend that all news media organisations in Australia be required to clearly state when AI has been used to generate news articles and news-related content.

Q14 Do you support a risk-based approach for addressing potential AI risks? If not, is there a better approach?

A risk-based approach is functional but should not be the only approach to regulating and governing AI in Australia. Combining a risk-based approach with measurable targets for increasing public literacy in AI, for example, will help build a society that is more aware of and resilient to the risks associated with AI, as well as prepared to embrace its potential.

Q17 What elements should be in a risk-based approach for addressing potential AI risks? Do you support the elements presented in Attachment C?

We support the elements suggested in Attachment C and particularly endorse the use of Notices and Explanations. We again highlight the importance of clear and accessible communication with the public and recommend that Notices and Explanations be made easily available in plain language and in formats that are accessible to members of the population with disabilities or for whom English is not their dominant language.

Q20 Should a risk-based approach for responsible AI be a voluntary or self-regulation tool or be mandated through regulation? And should it apply to:
a. Public or private organisations or both?
b. Developers or deployers or both?

We assert that the risk-based approach for responsible AI should be mandated through regulation to ensure compliance, build public trust and confidence, and better ensure best possible practice and use of AI. It should apply to both private and public organisations, and to both developers and deployers.

4. Conclusion
Finally, in addition to recommending a four-level risk-based approach (low, medium, high, and very high), tightly integrated across all government sectors and keenly aware of international approaches, definitions and breakthroughs, we encourage the review to go further and engage in an AI literacy program. It is important that we remember that rushed and hasty use and implementation of such technologies, as we saw with Robodebt, can have disastrous consequences. It is also vital that we recognise that these technologies are likely to have significant impacts not only on our people and society, but also on our lived and natural environments. When considering the impacts of AI technologies, we should be mindful of the costs of the Climate Crisis we have witnessed and are currently witnessing in Australia and around the world. In order to make the most of the benefits that AI has to offer, we must ensure that our governance of AI and assessment of risk incorporate the potential cost to lives, livelihoods, and living environment.

1 Ellis, K., Kent, M., & Peaty, G. (2017). Captioned Recorded Lectures as a Mainstream Learning Tool. M/C Journal, 20(3). https://doi.org/10.5204/mcj.1262
McRae, L. (2022). Ethical AI, criminal justice and sign language: A literature review. Centre for Inclusive Design.
2 Bender, S.M. (2023). Coexistence and creativity: screen media education in the age of artificial intelligence content generators. Media Practice and Education. https://doi.org/10.1080/25741136.2023.2204203
Ellis, K., Kao, K., & Kent, M. (2020). Automatic Closed Captions and Immersive Learning in Higher Education. Curtin University.
Bender, S.M. (2023). How Might We Co-Exist With Artificial Intelligence Content Generators in Subject English? English Teachers Association (WA) Conference, May 13th, University of Western Australia.
3 Benjamin, R. (2019). Race after Technology: Abolitionist Tools for the New Jim Code. Polity Press.
Cave, S., & Dihal, K. (2020). The Whiteness of AI. Philosophy & Technology, 33(4), 685–703. https://doi.org/10.1007/s13347-020-00415-6
Crawford, K. (2016, June 25). Artificial Intelligence's White Guy Problem. The New York Times. https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html
4 Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Conference on Fairness, Accountability and Transparency, 77–91. http://proceedings.mlr.press/v81/buolamwini18a.html
Collett, C., Neff, G., & Gomes, L. G. (2022). The Effects of AI on the Working Lives of Women. UNESCO, OECD, IDB. https://unesdoc.unesco.org/ark:/48223/pf0000380861
West, S. M., Whittaker, M., & Crawford, K. (2019). Discriminating Systems: Gender, Race and Power in AI. AI Now Institute. https://ainowinstitute.org/discriminatingsystems.html
Whittaker, M., Alper, M., Bennett, C. L., Hendren, S., Kaziunas, L., Mills, M., Morris, M. R., Rankin, J., Rogers, E., Salas, M., & West, S. M. (2019). Disability, Bias and AI. AI Now Institute. https://ainowinstitute.org/disabilitybiasai-2019.pdf
5 Hajkowicz, S., Karimi, S., Wark, T., Chen, C., Evans, M., Rens, N., Dawson, D., Charlton, A., Brennan, T., Moffatt, C., Srikumar, S., & Tong, K. (2019). Artificial intelligence: Solving problems, growing the economy and improving our quality of life. CSIRO Data61.
6 Sandry, E. (2023). HMC and Theories of Human–Technology Relations. In Guzman, A. L., McEwen, R., & Jones, S. (Eds), The Sage handbook of human–machine communication. SAGE Publications Ltd. https://doi.org/10.4135/9781529782783
7 Amin, M., & Reid, G. (2018). Prejudice in Binary: A Case for Inclusive Artificial Intelligence. https://acola.org/wp-content/uploads/2019/07/acola-ai-input-paper_inclusive-design_amin-reid.pdf
8 McLean, P., Oldfield, J., & Stephens, A. (2020). Foundation Skills for Your Future Digital Framework. Australian Government, Department of Education, Skills and Employment. https://www.dewr.gov.au/foundation-skills-your-future-program/resources/digital-literacy-skills-framework
9 UNESCO. (2022, February 23). UNESCO releases report on the mapping of K-12 Artificial Intelligence curricula. UNESCO. https://www.unesco.org/en/articles/unesco-releases-report-mapping-k-12-artificial-intelligence-curricula
UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000380455
10 Bridle, J. (2023, March 16). The stupidity of AI. The Guardian. https://www.theguardian.com/technology/2023/mar/16/the-stupidity-of-ai-artificial-intelligence-dall-e-chatgpt
Natale, S., & Ballatore, A. (2020). Imagining the thinking machine: Technological myths and the rise of artificial intelligence. Convergence, 26(1), 3–18. https://doi.org/10.1177/1354856517715164
Sartori, L., & Theodorou, A. (2022). A sociotechnical perspective for the future of AI: Narratives, inequalities, and human control. Ethics and Information Technology, 24(1), 4. https://doi.org/10.1007/s10676-022-09624-3
11 Brennen, J. S., Howard, P. N., & Nielsen, R. K. (2018). An Industry-Led Debate: How UK Media Cover Artificial Intelligence. University of Oxford. https://reutersinstitute.politics.ox.ac.uk/our-research/industry-led-debate-how-uk-media-cover-artificial-intelligence
12 Goggin, G., & Ellis, K. (2020). Disability, communication, and life itself in the COVID-19 pandemic. Health Sociology Review, 29(2), 168–176. https://doi.org/10.1080/14461242.2020.1784020
13 UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000380455
14 Taylor, J. (2023, July 27). AMA calls for stronger AI regulations after doctors use ChatGPT to write medical notes. The Guardian. https://www.theguardian.com/technology/2023/jul/27/chatgpt-health-industry-hospitals-ai-regulations-ama
15 Meade, A. (2023, July 31). News Corp using AI to produce 3,000 Australian local news stories a week. The Guardian. https://www.theguardian.com/media/2023/aug/01/news-corp-ai-chat-gpt-stories
16 Gal, U. (2023, June 23). Replacing news editors with AI is a worry for misinformation, bias and accountability. The Conversation. http://theconversation.com/replacing-news-editors-with-ai-is-a-worry-for-misinformation-bias-and-accountability-208196
Leffer, L. (2023, January 17). CNET Is Reviewing the Accuracy of All Its AI-Written Articles After Multiple Major Corrections. Gizmodo. https://gizmodo.com/cnet-ai-chatgpt-news-robot-1849996151
