#489
Interactive Games & Entertainment Association (IGEA)
15 Sep 2023

Submission to the Department of Industry, Science and Resources

Response to the Safe and Responsible AI in Australia Discussion Paper

August 2023

We acknowledge the Traditional Custodians of Country throughout Australia and their continuing connection to the land and sea. We pay our respects to all Aboriginal and Torres Strait Islander peoples, their cultures and to their elders past and present.
Introduction & Overview
About IGEA
The Interactive Games & Entertainment Association (IGEA) welcomes the opportunity to provide a submission to the Department of Industry, Science and Resources on the Safe and Responsible AI in Australia discussion paper (‘discussion paper’). The discussion paper seeks advice on mitigating the possible risks of implementing artificial intelligence (AI) in Australia and on gaps in the current domestic regulatory landscape. As such, we are pleased to provide a submission on the regulation of the intersection of AI technologies and the video games industry.
IGEA is the industry association representing and advocating for the video games industry in Australia, including the developers, publishers and distributors of video games, as well as the makers of the most popular gaming platforms, consoles and devices. IGEA also organises the annual Games Connect Asia Pacific (GCAP) conference for Australian game developers and the Australian Game Developer Awards (AGDAs), which celebrate the best Australian-made games each year. IGEA has over a hundred members, from emerging independent studios to some of the largest technology companies in the world.
Our industry prioritises providing a fun and safe gaming experience for our players. It implements world-leading parental settings and controls that allow individuals and parents to prevent, restrict and monitor gameplay. These controls are easy to set up, and extensive guidance is available online. In addition to the National Classification Scheme, which provides age guidance as well as consumer advice for games with particular content, the industry takes a world-leading approach to ensuring safe online environments, including a commitment to online safety technology, safety-centric game design, ongoing monitoring and evaluation, and comprehensive terms of service and codes of conduct to maximise safe online play.
Overarching comments
IGEA has a keen interest in the progression of AI in the video games industry from both a creative and a regulatory perspective, and has previously made submissions to the Australian Human Rights Commission and the World Economic Forum outlining the use of AI in video games.1 This submission first outlines how video games and AI intersect. It then addresses the key parts of the discussion paper that are relevant to the video games industry, making recommendations after each section.
Generally, AI in video games is not a context that would benefit from or require additional regulatory oversight. The risks of AI outlined in the discussion paper largely do not correlate with the uses of AI in video games, which, as discussed below, pose minimal risk to consumers. Our position is that any regulation should be evidence-based, carefully scoped and limited to what is practically necessary. While AI in video games is unlikely to require regulation, any regulation of AI generally should not inadvertently impact video games that contain or use AI technology. If specific AI regulation is to be established, we believe it should be industry-focused, particularly given the difference in harm and risk between video games and other digital industries.

1. https://igea.net/wp-content/uploads/2019/05/IGEA-submission-on-AI-and-human-rights.pdf

Page | 1
Video Games and Artificial Intelligence
Our submission urges the Government to consider and accommodate the specific, low-risk uses of AI in the video games industry, where AI is implemented for the benefit of players. Relevant to the video games industry, AI is used in relatively low-risk ways, including:
• AI-controlled ‘opponents’ that give the player in a single-player game a competitive challenge. In many games, a player is able to select different difficulty settings to best suit their skill level. Examples of games with this use of AI include sporting, action and strategy games.
• AI-controlled ‘allies’ that enable the player to team up or cooperate with the AI against another human player and/or AI-controlled player.
• AI is vital to enabling efficient pathfinding in games, determining how to get a non-human-controlled character from one point in an environment to another (e.g. in map-based games). AI may also be essential for determining how a non-human-controlled character interacts with a human player, which helps to progress and enhance the narrative of the game.
• The use of AI in video games may incorporate elements of machine learning to help improve the player experience. For example, studying the behaviour of human players may help game developers to design AI-controlled characters that more closely resemble human behaviour, giving players a more authentic experience even when they are not playing with friends or family. AI and machine learning are also being used by some video game companies to detect unfair player behaviour such as cheating, enhancing gameplay for everyone else.
• Trust and safety efforts in the video games industry are also bolstered by AI, such as Ubisoft and Riot Games’ collaborative ‘Zero Harm in Comms’ research project, which uses AI to detect harmful content in in-game player-to-player communication.2
• Game development studios are also starting to use machine learning for game creation. For example, many ‘AAA’ video games rely on large open worlds to provide exciting spaces for players to explore. However, creating such large levels and worlds is resource-intensive when crafted completely manually. As a result, developers are starting to leverage large maps of real-world terrain to help AI learn how to create realistic and interesting terrain automatically, giving artists and level designers a head start on creating even bigger and better worlds.
• AI can play a particularly important role in ‘serious games’, which involve the use of games and game technologies in diverse sectors including education, health care, defence, business, research and community. Not only can AI provide real-world simulations in, for example, disaster response games, it can also be used to identify disease or to evaluate player progress in rehabilitation games.3
• Finally, AI can be trained much faster by running it through video games than through interactions with humans. Video games also provide an optimal context to test the general intelligence of AI.4

2. Ubisoft and Riot Games announce the “Zero Harm in Comms” research project to detect harmful content in game chats - Ubisoft Montréal
3. See Artificial Intelligence–Driven Serious Games in Health Care: Scoping Review - PMC (nih.gov)
AI use in video games is focused on entertainment, safety or innovation, unlike most other contexts and use cases. The intention of AI in the video games industry is ultimately to enhance player enjoyment, accessibility and safety. We therefore believe regulation of AI in video games is generally unnecessary, and that further analysis and research on specific uses of AI in video games would be beneficial before embarking on any regulation.

Response to the Discussion Paper
Responses suitable for Australia
5. Are there any governance measures being taken or considered by other countries (including any not discussed in this paper) that are relevant, adaptable and desirable for Australia?
While we urge caution around the regulation of AI for video games and stress the need for a strong evidence base before any action is undertaken, if regulation were to be imposed, a risk-based approach would be best aligned with the use of AI in low-risk environments such as the creative and entertainment sectors.
The proposed European AI Act is arguably a model that is well aligned with AI use in the interactive games and entertainment industry. As outlined in the discussion paper, the AI Act “adopts a risk-based approach” to regulating AI, with different regulatory requirements for different use cases of AI, categorised as minimal, limited, high or unacceptable risk.5 Under this approach, different uses and applications of AI are regulated differently, rather than as one. This allows a sensible approach to minimal-risk AI applications such as in video games, whilst ensuring that higher-risk uses of AI are appropriately regulated or restricted. As provided in Attachment B of the discussion paper,6 the European Commission outlines that “AI-enabled video and computer games” are considered “[m]inimal risk” and therefore do not attract mandatory obligations.7

Recommendation 1: That the Government consider, if necessary, a targeted and risk-based approach to regulating AI similar to that of the European Union’s AI Act.

Target areas
10. Do you have suggestions for:
a. Whether any high-risk AI applications or technologies should be banned completely?
b. Criteria or requirements to identify AI applications or technologies that should be banned, and in which contexts?

4. See How video games can help Artificial Intelligence deliver real-world impact - AI for Good (itu.int)
5. Safe and responsible AI in Australia (Discussion Paper, June 2023), page 17.
6. Safe and responsible AI in Australia (Discussion Paper, June 2023), page 39.
7. Safe and responsible AI in Australia (Discussion Paper, June 2023), page 39.
The EU AI Act categorises practices that exploit vulnerable groups, such as children, as posing an unacceptable risk, and requires that they be banned.8 While such rules are sensible in principle, their implementation must be carefully considered and designed.
The video games industry is committed to the safety of everyone who plays games, particularly children, and uses innovative, cutting-edge technology in an effort to ensure children’s safety. We are concerned that blunt-edged regulatory efforts to ban the use of AI with respect to children may interfere with these safety efforts. For example, we note that machine learning can be used in anti-grooming efforts. A key benefit of the technology is its self-flagging mechanism, meaning that it does not rely on a child, who may not understand what is happening or may be reluctant to report an interaction, to flag an issue.9
We believe the use of AI in safety technology demonstrates the complexities of attempting to regulate such technology. For example, AI may be used to assist in biometric analysis and emotional inferencing for in-game moderation. This shows how AI can be used to ensure the safety of users while relying on techniques, such as biometric and emotional inferencing, that a regulatory model may classify as ‘high risk’ AI. We also note that some governments are putting pressure on digital industries to employ, or at least continue developing, these technologies, including for the purposes of age assurance.

Recommendation 2: That the Government take a balanced and nuanced approach to regulating AI tools or applications applicable to children, recognising that AI can be used for positive and beneficial outcomes.

Recommendation 3: Before employing any regulation, the Government consult with industry on any possible regulatory conflict between the development and employment of safety technology using ‘high risk’ AI.

Risk-based approaches
20. Should a risk-based approach for responsible AI be a voluntary or self-regulation tool or be mandated through regulation? And should it apply to:
a. public or private organisations or both?
b. developers or deployers or both?
Whilst we do not believe regulation of AI use in video games is necessary, if regulation were to be considered necessary for all uses, we suggest that a voluntary, industry-specific and industry-driven model be considered before any legislative responses. In the context of our industry, given the unique nature and processes involved in creating video games, we anticipate that any prescriptive regulation, particularly if designed for universal application across sectors, would be difficult to implement and possibly ineffective.
Video games vary widely in their design, audiences and creators, meaning any form of regulation would have to cater to these unique features of the industry. We are concerned that prescriptive regulation will unnecessarily stifle innovation and creativity in the video games industry, an Australian industry that, with the right government support, can grow to be worth $1 billion by 2030.10 Further, it is difficult to ensure that prescriptive measures accurately reflect the rapid changes in digital technologies, particularly with the increased incorporation of AI in the video games industry. Therefore, if regulation were to be implemented, we suggest that a voluntary self-regulation model be used and that this model then be assessed and reviewed before any legislative approaches are considered. This will also allow any future regulation to be well informed and evidence-based, which is critical to ensuring efficacy within the video games industry.

8. Safe and responsible AI in Australia (Discussion Paper, June 2023), page 39.
9. Microsoft - Entertainment Software Association (theesa.com)

Recommendation 4: That the Government consider, should the regulation of AI generally be found to be needed, implementation of industry-specific, risk-based, voluntary regulation of AI in video games instead of legislative approaches.

Intellectual Property
Whilst we acknowledge that intellectual property issues are not being considered in this discussion paper, the relationship between AI and IP is a significant issue not only for the video games industry but for the creative and artistic industries generally. Whilst there may be benefits to the use of generative AI for creative purposes, there are also serious questions about ownership and the rights of the owners of original works. For example, an image or character created using generative AI may have been trained on, and bear resemblance to, copyright-protected video game artwork. Not only does this raise critical questions about the use of copyrighted material in training AI, but also fundamental questions surrounding the ownership of AI-generated works. This is a policy area that is developing swiftly and must become a focus for the Government.

Recommendation 5: Consider the impact of AI on copyright policy and commence dialogue with the creative sectors as a matter of priority.

10. See IGEA-Game-Engine-paper.pdf

Any questions?

For more information on any issues raised in this submission, please contact IGEA’s Policy Officer, Sarah Deeb, via policy@igea.net

For more on IGEA and what we do, visit igea.net or follow us on Twitter below:
IGEA: @igea
Games Connect Asia Pacific: @GCAPConf
The Australian Game Developer Awards: @The_AGDAs

