
SUBMISSION BY COMMERCIAL RADIO & AUDIO IN
RESPONSE TO SAFE AND RESPONSIBLE AI IN AUSTRALIA
DISCUSSION PAPER
A. About CRA
Commercial Radio & Audio (CRA) is the industry body representing the interests of commercial
radio broadcasters throughout Australia. CRA has 261 member stations and represents the
entire Australian commercial radio industry. 220 of CRA’s member stations are based in
regional and rural areas.

Commercial radio continues to dominate commercial listening:

• 88% of Australians aged 10 to 24 tune in weekly to commercial radio, for an average of
10 hours and 15 minutes per week;

• breakfast radio attracts nearly 8.62 million listeners; and

• 3.29 million listeners stream commercial radio weekly.1

The Deloitte Access Economics 2023 Connecting Communities Report2 highlights the
important economic and social contribution that our members make to Australia, through the
provision of radio and audio services. Our members deliver trusted, local content to
Australians all over the country.

As highlighted in the 2023 Connecting Communities Report, every year, our members:

• contribute $1 billion to GDP;

• provide a $320 million boost to regional Australia;

• produce 1.1 million hours of local content, across broadcast, streaming and podcasts;

• play 160,000 hours of Australian music, or 2.7 million Australian songs – providing an
unrivalled platform for the promotion of Australian musicians;

• broadcast 42,000 hours of news and 2,200 hours of emergency service content; and

• provide 251,000 hours of locally significant content in regional communities.

Our members support 6,600 full time jobs – 38% of those roles are in regional Australia.

As set out in the 2023 Connecting Communities Report:

74% of Australians believe commercial radio and audio build a sense of community.

59% of Australians believe radio is a trusted source of news and current affairs.

B. Overview of submission
• Commercial Radio & Audio (CRA) appreciates the opportunity provided by the
Department of Industry, Science and Resources (Department) to comment on the Safe
and Responsible AI in Australia Discussion Paper (Discussion Paper).

1 CRA data available here: https://www.commercialradio.com.au/Industry-Resources/Media-Releases/2023/2023-07-12-Commercial-radio-listening-surges-54-mi
2 Available here: https://www.commercialradio.com.au/RA/media/General/Documents/CRA-Deloitte-Connecting-Communities-2023-Report.pdf?ext=.pdf

• CRA broadly supports a legislative solution. The use of artificial intelligence (AI),
including generative AI, provides exciting opportunities for Australia. As the Discussion
Paper rightly notes, while AI has benefits, it also brings significant risks. If Australia
wishes to be a leader in this sector, it must implement a regulatory framework that is
sufficiently robust to both promote responsible innovation and provide safeguards
against a broad range of individual and economic harms.

• Although it is appropriate that the Government is looking at a governance model for the
use of AI, the regulation of AI should be considered more holistically. The existing laws
that apply to AI do not address all areas of concern and further regulation of AI is
required, particularly in relation to copyright (which CRA notes is not the subject of the
Discussion Paper).

• The use of generative AI may be problematic for the commercial radio and audio sector.
Harms may arise from the use of the content of CRA members in the creation of
generative AI systems and also from the potential for generative AI systems to act as a
gatekeeper between Australian media companies and audiences. CRA supports the
ACCC being tasked to undertake an inquiry to investigate these concerns, and the
regulatory intervention required to address them.

C. Legislation is recommended
C.1 CRA broadly supports a legislative approach
The Discussion Paper highlights the existing economy-wide, and sector-specific, regulation that
would already apply in relation to specific AI applications, including for example the
Competition and Consumer Act 2010 (Cth) (CCA) and the Australian Consumer Law, which is a
Schedule to the CCA, the Online Safety Act 2021 (Cth), anti-discrimination legislation,
therapeutic goods regulation and more.

CRA agrees that governance arrangements should:

• Ensure the existence of appropriate safeguards – this is particularly important in the
context of an evolving sector, where AI applications (and automated decision making
(ADM) applications) may initially seem benign but may actually be high risk.

• Provide, to the extent possible, greater certainty as to compliance requirements. The
Discussion Paper highlights that this has benefits for businesses as it provides clarity as
to the types of behaviours that they may engage in, but it also provides benefits to
consumers and businesses that themselves engage with AI businesses, as they have a
clear understanding of what they should expect from those AI businesses and may take
appropriate action if those expectations are not met.

CRA broadly supports a legislative model, given the significant harms that might result from
high-risk AI activities. It is not appropriate that the large, international digital platforms, which
currently dominate the AI sector, should be subject to a self- or co-regulatory regime in
relation to high-risk activities. It is also not appropriate to have different governance
frameworks applying to activities which have different risk levels. Accordingly, any
governance framework should be implemented through legislation.

The suggested approach in the Discussion Paper outlines six risk management requirements,
namely impact assessments; notices regarding AI use; “human in the loop” requirements (for
human involvement); explanations in relation to outcomes/decisions; training; and
monitoring.

While generally this approach is appropriate, as is the graduated application of the
requirements depending on risk levels, CRA notes that transparency is a key element that
must be given greater weight. Not only should it be clear to any person or entity when they
are engaging with AI technology, and how decisions are made using AI, but there also needs to
be greater transparency regarding the data that is used to train the relevant system, including
the large language models (LLMs) and multimodal foundation models (MFMs) on which any
generative AI technology is based. This transparency is important, irrespective of the risk level
of the technology. Users should be aware of the source of the information so that they may
assess whether the information they receive is trustworthy. For
content creators, this is important to ensure that they can control when, and on what terms,
their content is used. Transparency should be a core element of the framework.

C.2 Other harms in the media sector should be addressed
While CRA is supportive of an overarching legislative framework to govern AI, there are
specific issues that impact commercial radio stations, and the media sector more broadly,
where additional regulation is required. These issues are discussed in the next section.

D. Protecting the professionally produced content of our members
D.1 Large language models and multimodal foundation models
Generative AI generates output, typically content but also recommendations or decisions, in
response to user prompts. As set out in Australia’s Chief Scientist Rapid Response Information
Report Generative AI: Language models and multimodal foundation models (Research
Report) that supports the Discussion Paper, generative AI is based on LLMs and MFMs.

LLMs are trained on vast quantities of text, while MFMs are trained on vast quantities of a
wider range of content, including speech and other audio content.

The vast quantities of content that have been used to train the early examples of generative
AI chatbots, namely OpenAI’s ChatGPT and Alphabet’s Bard, have been sourced from the
internet. While some limited independent analysis has been undertaken to determine, at least
to some extent, the content that has been used to train LLMs and MFMs,3 OpenAI, Alphabet
and other AI companies do not disclose the source of the vast quantities of content that they
use to train their models. Therefore, there is no way for Australian media companies,
including commercial radio stations, to determine whether their online content has been used
for these training purposes, though it must be assumed that this has occurred, as discussed
immediately below.

D.2 Valuable content
Our members make significant investments in producing the content that they make available
to Australians. That content is broadcast and made available online. As well as the significant
investments made in news, including local news for stations in regional and remote areas, our
members invest heavily in other forms of audio content, such as podcasts – including, for
example, the very popular Hamish & Andy and Casefile True Crime podcast series.

3 For example, The Washington Post, together with the Allen Institute for AI, investigated the websites used in Google’s C4 data set, which is part of the data used by Google’s Bard.

In addition, our members are subject to a comprehensive regulatory regime, which imposes
strict requirements in relation to the content that our members produce for Australian
audiences. Commercial radio stations operate under licences issued by the Australian
Communications and Media Authority (ACMA) under the Broadcasting Services Act 1992 (Cth)
(BSA). In addition to complying with the BSA and the conditions in their licences, commercial
radio stations must comply with the Commercial Radio Code of Practice (Code), a co-
regulatory code that is registered with the ACMA. The ACMA monitors compliance with the
Code.

The Code includes important requirements for the content of commercial radio stations. For
example, it requires news to be presented accurately and impartially and requires clear
distinctions to be made between news and commentary or comment. Other content is also
regulated under the Code; for example, restrictions are imposed on content that is likely to
incite hatred, serious contempt or ridicule, and advertising must be identifiable.

Reflecting the investment of our members, and the high standard of commercial radio content
(facilitated by a robust regulatory regime), there is no doubt that the content of our members,
such as news programs and podcasts, that is made available online would be used in training
MFMs. The valuable content of other Australian media companies would also be used for this
purpose (and in the case of media companies that provide text content online, in the training
of LLMs).

As mentioned above, there is currently no visibility of what online content made
available by our members has been used in the training of MFMs. It should also be
remembered that there are currently no reasonable steps that commercial radio
stations may take to limit the ability of MFMs to access and use their online content for
training purposes. A key feature of the online services that our members make available to
Australians is that they are free, as is also the case for commercial radio broadcast services.
This is because, as mandated under section 14 of the BSA, our members
generate their revenues through advertising, not through charging Australians for access to
their content.

D.3 Need for consideration of copyright issues: training using content
It would be expected that copyright laws would be effective in protecting against the use of
copyright material for the training of LLMs and MFMs, allowing the holders of copyright to
determine both whether consent is provided for that use and whether to charge for such use.
In this context, it is clear why transparency is so important. Only where our members are
aware that their content is being used for these purposes will it be possible for them to seek to
put in place commercial arrangements for that use.

However, globally, copyright laws have not been effective in ensuring that consents have been
obtained, or payments made, to copyright holders. Several lawsuits alleging breach of
copyright in relation to generative AI have already been commenced in the United States. For
example, novelists have commenced a class action alleging breach of copyright against
OpenAI, Getty Images has commenced proceedings against Stability AI (an open source
generative AI company) for unauthorised use of its images, and a class action lawsuit has been
commenced against Microsoft and OpenAI on behalf of open-source programmers for the
unauthorised use of copyrighted code in ChatGPT and also in Microsoft’s AI assistant, Copilot.4

The Discussion Paper states that intellectual property issues, particularly copyright, will not be
considered. The Discussion Paper states that copyright issues will be separately discussed at a
Ministerial Roundtable on Copyright established by the Attorney-General. That roundtable
has held two meetings. At the first meeting, the interaction of copyright and AI was
acknowledged as a key issue. That topic was not discussed at the second meeting, beyond a
comment that it would be discussed at another meeting later in the year. The
Roundtable discussions provide no indication that any regulatory steps will be taken to
provide protection to media companies, such as CRA’s members and others that have created
the content that is used in generative AI.

CRA’s submission is that this is an urgent issue that should be resolved as soon as possible.
The situation experienced in other jurisdictions such as the United States, with litigation being
commenced that will take many years to work through the courts – resulting in significant
costs for all parties and ongoing legal uncertainty as to the scope of existing copyright
protections – should not be allowed to be replicated here in Australia. Regulatory
intervention will be required to ensure that our members are entitled to control whether their
content is used and, if consent is given, the terms of that use including payment.

D.4 Role of generative AI as a “gateway”
Generative AI can act as a gateway to information. There is no transparency as to the content
that is used to train LLMs and MFMs. There is also limited transparency as to the source of
content that is directly used to respond to questions that are asked by individuals using
generative AI chatbots such as ChatGPT and Bard.

Evidence to date is that ChatGPT does use journalistic content to respond to users of
that service. CRA’s view is that this would also apply with other generative AI services, even if
it is not disclosed to users.

This creates issues similar to those that were examined in detail in the 2019 Final Report from
the Australian Competition and Consumer Commission’s (ACCC) groundbreaking Digital
Platforms Inquiry. As the Final Report set out, digital platforms have, over time, become a
“gateway” between Australians and the online services that are offered by media companies,
including the services of commercial radio broadcasters.

Arguably, the concerns created by the potential gatekeeper role that generative AI will play
are more serious than the concerns identified in the ACCC’s Digital Platforms Inquiry.
Search engine and social media services identify the news and other media content that they
provide access to, typically providing links to that content, meaning individuals may access
the original source if they wish. However, as generative AI provides complete responses to
questions, typically with no obligation to provide any attribution of the source, the
“gatekeeper” issue may become more problematic. This is because there will generally be no
ability to “click through” to the online services of our members – in fact, in most cases it may
not be known by users that the content used to respond was the content of our members.

4 These examples of copyright litigation, and other examples, are referred to in this Wall Street Journal article: https://www.wsj.com/articles/ais-growing-legal-troubles-section-230-publisher-class-action-9efaf374?mod=Searchresults_pos2&page=1

Such an outcome would put further significant strain on the advertising-funded business
models of commercial radio stations. Without Australians using
their services, stations will not be able to generate revenue through the sale of advertising.

D.5 Further consideration is required
Considering the concerns that CRA has raised in this section D, CRA urges that further analysis
of these media-specific issues be undertaken to determine if a regulatory response is required.
The ACCC has significant knowledge of, and expertise in analysing, digital platform markets.
The ACCC is well placed to perform the analysis that is needed here.

The ACCC could be tasked to undertake this role under the 2020-2025 Digital Platform
Services Inquiry, which still has approximately 18 months to run. Alternatively, the ACCC
could be separately instructed to examine this issue in a short inquiry, limited to no longer
than six months. Whatever approach is taken, the key issues that should be considered by the
ACCC would be:

• The likely impact of generative AI on the media sector in Australia, both in terms of the
use of the high-quality content produced by this sector to train LLMs and MFMs, and also
by considering the role that generative AI models may play by acting as “gatekeepers” or
otherwise inhibiting access by Australians to the sites and content of Australian media
companies.
• Whether a market failure exists, or is likely to exist, that warrants regulatory intervention.
As in the case of the Digital Platforms Inquiry, this would include a consideration of
whether there is such a disparity in bargaining power between AI companies, such as
Alphabet, Meta, Amazon and Microsoft, and Australian media companies that balanced
commercial arrangements are not able to be reached in relation to the use of the content
produced by the Australian media companies. The ACCC should also assess whether this
inability to negotiate commercial arrangements is likely to have an impact on the
production by Australian media companies of the high-quality content on which
Australians rely.
• If a market failure is found to exist, or to be likely to come into existence, the ACCC should
assess whether new regulatory measures are required to address this and, if so, advise
Government on those proposed measures.
• Finally, consideration should be given to whether the transparency measures proposed
for the AI regulatory model outlined earlier in this submission will be adequate in relation
to the use of the high-quality content produced by the media sector or whether sector
specific transparency measures are required.

D.6 A final point on scams and disinformation
A significant risk in the ever-increasing use of AI arises from its potential to create
misinformation and disinformation. The dissemination of that content carries with it the well-
understood risk of undermining Australia’s democracy and trust in Australia’s institutions. AI
models are also used extensively in online scam activities. The creation of the National Anti-
Scam Centre, within the remit of the ACCC, is a recognition of the growing problem in relation
to scams and the need for regulatory action to be taken.

CRA supports additional sector specific measures to combat the use of AI to generate and
disseminate misinformation, disinformation and scams.

E. A requirement to act quickly
Australia has the opportunity to take a clear-eyed look at generative AI and its potential future
uses and adopt an appropriate regulatory model, which may have economy-wide and sector-
specific elements. The regulatory model adopted should encourage the development of the
AI sector, while at the same time providing appropriate protections and having regard to
Australia’s specific environment and its role in the world economy.

While, as the Discussion Paper acknowledges, generative AI technologies and their
applications are likely to evolve rapidly, and there is not yet any international consensus on the
appropriate form of regulatory model, this does not mean that Australia can afford to sit by
and adopt a “wait and see” approach, essentially to allow harms to become apparent, or a
consensus international regulatory model to develop, before it takes action.

A wait and see approach has not worked well in other areas of digital innovation and
expansion, where delays in acting to address issues have made those harder to resolve. An
example of such difficulties was the introduction of the News Media Bargaining Code – which
faced significant resistance from dominant digital platforms. Acting now will allow the rules of
the road to be set to avoid problems arising.

Commercial Radio & Audio
August 2023
