
Submission by the Interactive
Advertising Bureau (IAB) Australia

Safe and responsible AI in Australia -
Discussion Paper

Department of Industry, Science and Resources

August 2023

Contents

About IAB Australia
Executive Summary
1. Introduction
1.1 Overview of digital ad industry
1.2 Approach taken in this submission
2. Use of AI and related applications in the advertising industry
2.1 Data-driven buying automation and programmatic advertising
2.2 Measurement of ad effectiveness, analytics and insights
2.3 Preventing ad fraud and malvertising management
2.4 Dynamic creative optimisation
2.5 Creative development
2.6 Recommendation systems
2.7 Customer Support
3. Regulating AI and related applications used in Advertising
3.1 Application of existing technology neutral laws
3.2 Existing regulations that apply to advertising
3.3 Critical importance of privacy law reform for use of AI
3.4 Approach to regulating new and emerging risks
3.5 Transparency and explainability
3.6 Identifying bias in advertising
4. Conclusion

About IAB Australia

The Interactive Advertising Bureau (IAB) Australia Limited (www.iabaustralia.com.au) is the peak trade association for digital advertising in Australia.
IAB Australia was established in 2005, incorporated in 2010 and is one of 47 IAB offices globally. IAB globally is the leading trade association for developing digital advertising technical standards and best practice.
Locally there is a financial member base of approximately 180 organisations that includes media owners, platforms, media agencies and advertising technology companies, as well as marketers. The board has representation from the following organisations: Carsales, Google, Guardian News & Media, Meta, News Corp Australia, Nine, REA Group, Seven West Media and Yahoo.
IAB Australia’s charter is to grow sustainable and diverse investment in digital advertising in Australia by supporting the industry in the following ways:
• Advocacy
• Research & resources
• Education and community
• Standards
The Charter includes a focus on standards that promote trust and steps to reduce friction in the ad supply chain, and ultimately on improving ad experiences for consumers, advertisers and publishers.

Executive Summary
• IAB thanks the Government for the opportunity to make this submission on behalf of the digital
advertising industry.
• The digital advertising ecosystem plays a central role in Australia’s economy and society. It is a
significant funding component of the internet, enabling the delivery of free online content, products
and services to all Australians. It grows businesses, supports 450,000 jobs, contributes $94 billion to
GDP and provides $55.5 billion annual consumer benefits.
• The broad range of technologies defined as AI systems in the Discussion Paper, ‘Safe and Responsible
AI in Australia’ (“the Discussion Paper”), is not new. These technologies are increasingly becoming part of our daily
infrastructure. In an advertising context, AI and machine learning are integrated into industry practices,
including in the placement, delivery, creation and measurement of ads as well as the management of
ad fraud. Many critical functions within the industry would not be possible without AI.
• Much of the regulatory framework that applies to advertising is made up of technology neutral laws
that were intended to capture harms that arise regardless of the technologies involved. In IAB’s view,
in many cases existing laws will continue to be sufficient to address lower-level risks from AI
technologies which are currently in use. This is evidenced by the fact that regulators are increasingly
turning to the existing laws, including Australian Consumer Law, the Privacy Act, and numerous laws
and self-regulatory codes that apply to advertising of different products or services, to address harms
that arise from advertising where AI has been involved in some way.
• For new and emerging risks that arise from AI, IAB supports a risk-based approach to regulation. A risk-
based framework should consider the likelihood of harm as well as the severity of harm and prioritise
high-risk use cases. Given the wide range of use cases and technologies that fall under the broad
umbrella of AI, and that unique issues may arise in different sectors, we support a risk-based approach
which sectoral regulators can use to clarify how existing regulations apply to uses of AI or update
oversight and enforcement regimes as appropriate. ‘Minimal risk’ AI should not be restricted under the framework, consistent with the approach taken in other jurisdictions such as the EU.
• Ensuring AI is transparent and explainable will be important for identifying risks and potential harms of
AI, as well as for empowering users to make informed decisions, ensuring accountability and promoting
trust and confidence in AI. Transparency should be incorporated early into product design and
development processes in order to be most effective and should be appropriately balanced with
competing risk factors such as commercial confidentiality and security.
• The Discussion Paper identifies algorithmic bias as one of the biggest risks of AI. This is also a concern
for our industry as bias has been shown to reduce the effectiveness of advertising campaigns and
damage customer relationships. However, in our view, AI tools will also play an important role in
identifying and minimising biases. These tools are becoming increasingly common.
• As the key piece of legislation that regulates data (where that data is personally identifying), the Privacy
Act is an important piece of the puzzle in ensuring that AI is used safely and responsibly and that risks
of bias are mitigated. The validity and reliability of data used to train models for their intended purpose
relies on sufficient data being available. The Privacy Act review should not unnecessarily restrict the
use of data where that data is not personally identifying. This would lead to fewer data inputs being
available for purposes including those outlined in this submission, amongst others, therefore increasing
the risk of bias. Instead, privacy enhancing technologies (PETs) have a role to play in ensuring data is
used in a manner that is not personally identifying, and the law should encourage use of these.

1. Introduction
1.1 Overview of digital ad industry
The digital advertising ecosystem plays a central role in Australia’s economy and society. It enables the delivery of free online content, products and services to all Australians, grows businesses, supports 450,000 jobs and contributes $94 billion to GDP.
Digital advertising supports industry sectors including retail, finance, automotive, FMCG, technology and real estate, amongst others. It is an essential enabler of growth across Australia’s digital economy.
Total Australian digital advertising expenditure has increased from $3.1 billion a decade ago to $14.2 billion today, with the industry posting a growth rate of 2% in 2020, 36% in 2021 and 9% in 2022.1 Over 66% of advertising is now online.2
For Australian consumers, digital advertising has fuelled an expanding online ecosystem of information, news and entertainment content, as well as social and search services, free of charge.
Consumers highly value this. According to analysis commissioned by IAB, the average Australian consumer is willing to pay $544 annually to access currently free ad-supported digital services and content.3
The ad-supported online ecosystem also provides significant benefits to society more broadly. It connects communities, supports democracy through free access to news content, provides increased access to job opportunities, education and financial information in addition to entertainment content and supports a thriving second-hand marketplace. For consumers on annual incomes below $50,000, the value they attribute to content and services that are currently free was roughly double that of consumers with annual incomes of over $80,000.4

1.2 Approach taken in this submission
The Discussion Paper defines AI broadly as “an engineered system that generates predictive outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives or parameters without explicit programming.”

While there has been heightened public discourse in response to the proliferation of publicly available generative AI services, there are already numerous technologies being widely used throughout the economy, including in the advertising industry, which fall under the broad umbrella of AI as defined in the Discussion Paper. This submission is focussed on the range of AI technologies and related applications currently being used in an advertising context, the vast majority of which fall into the latter category of established, widely used technologies. It is structured in two parts which set out:

• Current key uses of AI and related applications in an advertising context
• Our views on AI regulation, including application of existing laws and new and emerging risks.

1 PwC, Online Advertising Expenditure Report, 2023.
2 PwC, Ad’ing Value, 2022, 4-5.
3 Ibid
4 Ibid

2. Use of AI and related applications in the advertising industry
As highlighted in the Discussion Paper, AI and related applications are having a major impact across the economy and society.5 They enable tasks to be done at speed and scale that would otherwise be impossible. Advertising is a good example of this. Many technologies falling under the broad umbrella of AI as defined in the Discussion Paper6 are integrated into advertising industry practices.

The advertising industry, like other industries across the economy, has undergone significant change in response to developments in the technologies used in the creation, placement, delivery and measurement of ads, as well as in day-to-day tasks. In this section, we outline key examples of where AI and related applications are being used in advertising. This is not intended to be an exhaustive list.

2.1 Data-driven buying automation and programmatic advertising
Over the last two decades, the technologies that underpin online advertising have fundamentally changed – from single dimensional web-based banner advertising to the emergence of ‘programmatic’ trading of advertising spots and data-driven targeting and buying automation.7
Initial variations of these buying automation technologies were first launched in the mid-late 2000s, with applications by both publishers and advertisers being rapidly adopted. By the mid-2010s this had become mainstream; programmatic buying accounted for over 40 per cent of display8 advertising buying for content sites in 2022.9
The group of technologies that enable this buying automation to occur,10 collectively known as “ad tech”, enable transactions to occur in seconds and can be used to manage entire digital advertising campaigns. They connect buyers and sellers of advertising, inclusive of transacting, delivery and measurement, across internet enabled media channels.

These advertising technologies use machine learning to:

• make predictions about consumer behaviour and purchasing patterns based on data inputs, for example, to determine when a user is most likely to see a particular ad and engage with it; and
• make automated decisions about the most effective delivery and placement of those ads, based on those predictions (a minimal illustrative sketch follows below).

2.2 Measurement of ad effectiveness, analytics and insights
Measuring the effectiveness of advertising is fundamentally important to the industry – it is a form of quality assurance, providing confidence to advertisers in terms of their investment, and ensuring publishers and platforms are accountable in relation to the inventory they make available. It also enables the identification of ineffective ads or ad campaigns.

5 Discussion Paper, Introduction.
6 Discussion Paper, Figure 1.
7 PwC, Ad’ing Value, 2022, 10-13.
8 Display advertising includes banner advertisements, video advertising, native formats, partnerships, sponsorships, emails, digital audio and podcasts.
9 PwC, Ad’ing Value, 2022, 13.
10 Ad Tech Services include: Demand Side Platforms, Sell Side Platforms, buy-side ad servers, sell-side ad servers, buy-side ad networks, sell-side ad networks, sell-side ad insertion, data management platforms, verification vendor services. IAB Industry Working Group, 2022.

The measurement of online advertising has evolved significantly over the last decade or so as a result of the development of machine learning, often providing faster and less expensive options for a more diverse range of marketers. Modern measurement techniques rely on machine learning to analyse data points collected in relation to users’ engagement with online ads, estimate the impact of an ad on a user based on available data, and predict the most effective use of marketing budgets on that basis.
Current measurement techniques include:11

Attribution techniques

Attribution techniques are based on third party identifiers that can recognise a user to provide a more personalised experience, as well as track a user’s online activity for the purposes of measuring an ad’s effectiveness. While there are various attribution techniques, at least some use machine learning models to automate the process of inferring a user’s activity, or probable activity, online. Attribution techniques are expected to become largely redundant once browsers no longer enable use of third-party cookies.
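As an illustration of data-driven attribution, the sketch below (with invented channels and data) shows how a simple model can estimate each channel touchpoint's contribution to conversions; production attribution models are considerably more sophisticated:

```python
# A toy illustration of data-driven attribution: a model learns how much each
# channel touchpoint contributes to the probability of conversion. Channels,
# exposures and outcomes are all invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

channels = ["display", "search", "social", "email"]
# Each row records which channels one user was exposed to (1 = exposed).
X = np.array([
    [1, 1, 0, 0], [0, 1, 1, 0], [1, 0, 0, 1],
    [0, 0, 1, 0], [1, 1, 1, 0], [0, 1, 0, 1],
])
y = np.array([1, 1, 0, 0, 1, 1])  # 1 = the user converted

model = LogisticRegression().fit(X, y)

# The fitted coefficients act as rough per-channel contribution scores.
for name, coef in zip(channels, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```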

Marketing mix modelling techniques

Modern marketing mix modelling (MMM) uses machine learning models to statistically analyse historical data inputs and predict how to most effectively invest in future marketing activities.12 As with all machine learning models, this requires sufficient data inputs in order for the outputs and predictions to be reliable.
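A minimal illustrative sketch of the MMM approach, using invented spend and sales figures, is set out below; real models account for many more variables (for example, seasonality and carry-over effects):

```python
# A toy marketing-mix-modelling sketch: regress an outcome (weekly sales) on
# historical media spend per channel, then compare hypothetical allocations
# of the same budget. All figures are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

# Weekly spend ($000s) on [tv, digital_display, search], and observed sales.
spend = np.array([[50, 20, 10], [60, 25, 15], [40, 30, 20], [55, 35, 25], [45, 40, 30]])
sales = np.array([500, 580, 560, 650, 640])

mmm = LinearRegression().fit(spend, sales)

# Predict the outcome of two candidate allocations of the same total budget.
plan_a = np.array([[70, 20, 10]])
plan_b = np.array([[40, 35, 25]])
print("Plan A:", mmm.predict(plan_a)[0], "Plan B:", mmm.predict(plan_b)[0])
```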

Other methods of estimating attention

Other methods of measurement are continually being developed. For example, technology has recently been developed which uses eye-gaze data from people who have provided opt-in consent to build a machine learning model that estimates a person’s attention to an ad, and therefore the ad’s effectiveness, based on their eye movements.13

2.3 Preventing ad fraud and malvertising management
Machine learning models have a critical role to play in preventing fraud, including ad fraud, which can damage brand reputation, harm consumers, and lead to wasted ad budgets and lower effectiveness of ad campaigns for businesses.

Machine learning models enable detection of patterns, anomalies and potential threats that may not be picked up by humans – for example, through modelling of known bad or known good behaviour.

Ad fraud, like other categories of fraud, is constantly evolving and becoming more sophisticated. It can include conduct such as impression fraud, click fraud, invisible ads, ad stacking, cookie stuffing, identity stuffing, domain spoofing, fake leads, ad injection, click injection, ad hijacking, pixel stuffing, geo-targeting fraud, click flooding, app install fraud, app install farms, click farms, SDK spoofing, ad laundering, viewability fraud, click spamming, redirect attacks, cross-device fraud, bot traffic, device fingerprinting, programmatic ad fraud and others.14 These practices are explained in more detail in IAB’s Ad Fraud Handbook,15 but in summary involve mimicking legitimate human behaviour, infiltrating a user’s device, or intentionally manipulating ad performance data.16

11 https://iabaustralia.com.au/research-and-resources/audience-measurement-and-industry-ratings/
12 Ibid.
13 See https://playgroundxyz.com/aip; and https://www.amplifiedintelligence.com.au/media/2022-drum-award-win-for-ai-machine-learning/

In order to prevent ad fraud, machine learning models can be used to:

• analyse traffic patterns to detect unusual activity, for example, an unusually high number of clicks from a single IP address or device;
• detect and block invalid traffic such as bots, click farms and malware; and
• monitor ad campaigns on an ongoing basis for invalid traffic.

This would not be possible without AI’s capability to detect patterns within massive amounts of data and online traffic.
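By way of illustration, the sketch below (with invented per-IP traffic features) shows how an unsupervised anomaly-detection model of the kind described above can flag bot-like click activity:

```python
# A toy sketch of anomaly-based invalid-traffic detection: an unsupervised
# model flags click patterns that deviate from normal traffic, such as an
# unusually high click rate from one IP address. Features are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# Per-IP features: [clicks_per_hour, distinct_ads_clicked, avg_secs_between_clicks]
traffic = np.array([
    [3, 2, 400], [5, 4, 250], [2, 2, 600], [4, 3, 300],
    [6, 5, 200], [3, 3, 500],
    [480, 40, 2],   # bot-like: hundreds of clicks, seconds apart
])

detector = IsolationForest(contamination=0.15, random_state=0).fit(traffic)
flags = detector.predict(traffic)  # -1 = anomalous, 1 = normal
print(flags)                       # the last row should be flagged as invalid
```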

2.4 Dynamic creative optimisation
Dynamic Creative Optimisation (DCO) uses machine learning to enable the creative in ads to be personalised to specific viewers, based on a consumer’s interests, previous ads viewed or other available signals, for example, weather, language or location. Which creative elements are most relevant is determined probabilistically based on available data sets. By enabling the key creative elements of an ad to be chosen dynamically through automated processes, DCO delivers more relevant ad experiences to users.17
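One simple way to illustrate this probabilistic selection is a bandit-style sketch, shown below with invented creative variants and engagement statistics; production DCO systems use far richer signals and models:

```python
# A toy bandit-style sketch of DCO: keep engagement statistics per creative
# variant and context signal (here, weather), pick the variant most likely
# to be relevant, and occasionally explore. Variants and rates are invented.
import random

# Observed engagement rates per (weather signal, creative variant) so far.
stats = {
    ("rainy", "indoor_creative"): 0.08,
    ("rainy", "outdoor_creative"): 0.02,
    ("sunny", "indoor_creative"): 0.03,
    ("sunny", "outdoor_creative"): 0.09,
}

def choose_creative(weather, epsilon=0.1):
    variants = ["indoor_creative", "outdoor_creative"]
    if random.random() < epsilon:  # occasionally explore an alternative
        return random.choice(variants)
    # Otherwise exploit: pick the variant with the best observed rate.
    return max(variants, key=lambda v: stats[(weather, v)])

print(choose_creative("rainy"))  # usually "indoor_creative"
```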

In 2022, 44% of agencies had started using DCO for some of their digital audio advertising creative, up from 27%. Another 43% of agencies intend to use DCO in audio over the next year to increase efficiency for marketers and relevance for consumers.18

2.5 Creative development
There are a range of tools available today that automate the process of creating an ad, and businesses across the economy are adopting them. These include both machine learning and generative AI tools that assist with tasks such as designing ads, creating marketing strategies and plans, creating adaptive banners and tracking ad performance, or a combination of the above.19

One example which has recently garnered media attention for its use in scams is technology that enables the manipulation of appearance through generative AI, more commonly known as ‘deepfakes’. Despite that attention, these technologies have a range of legitimate uses, for example, in video games, entertainment, customer support and art, and are increasingly becoming mainstream.20 There are also technical options available to help make the provenance of this content more transparent, for example, through watermarking. In advertising, this technology has the potential to enable personalised ads to be created, conveying more relevant messaging and experiences to customers, for example, by changing the visual appearance of models and adapting languages to suit different markets.21

14 https://iabaustralia.com.au/resource/digital-ad-fraud-handbook/
15 https://iabaustralia.com.au/resource/digital-ad-fraud-handbook/
16 For example see https://techbeacon.com/security/how-ai-will-help-fight-against-malware
17 https://info.innovid.com/blog/ai-in-ad-tech-cutting-through-hype-harnessing-potential
18 IAB Australia Audio Advertising State of the Nation Report https://iabaustralia.com.au/resource/audio-advertising-state-of-the-nation-2023/
19 For example, Canva, Hunch, Creatopy, Hippoc, Motion, AdsGPT, ChatGPT, Firefly, Dashi, Trilingual.
20 'Deepfake is the future of content creation' - BBC News

2.6 Recommendation systems
Recommendation systems use machine learning to process and rank data in order to make recommendations to consumers. They are in common use across online services including video-on-demand and streaming services, music services, shopping sites, dating sites and many others, many of which rely heavily on these systems to deliver their services. They undertake real-time data analysis, and the recommendations they make evolve as data is processed over time, taking into account a customer’s past browsing behaviour, search queries and purchase history.22
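The following sketch illustrates one common recommendation technique, item-based collaborative filtering, on an invented interaction matrix; commercial systems operate at vastly greater scale and update continuously:

```python
# A toy item-based collaborative filtering sketch over an invented user-item
# interaction matrix. Real recommendation systems are far larger and are
# updated continuously as new data is processed.
import numpy as np

items = ["drama_series", "cooking_show", "sports_doc", "thriller_film"]
# Rows = users, columns = items; values = interaction strength (0 = unseen).
R = np.array([
    [5, 0, 0, 4],
    [4, 1, 0, 5],
    [0, 5, 4, 0],
    [1, 4, 5, 0],
], dtype=float)

# Cosine similarity between item columns.
norms = np.linalg.norm(R, axis=0)
sim = (R.T @ R) / np.outer(norms, norms)

def recommend(user_idx, top_n=2):
    scores = R[user_idx] @ sim         # weight items by similarity to history
    scores[R[user_idx] > 0] = -np.inf  # do not re-recommend items already seen
    return [items[i] for i in np.argsort(scores)[::-1][:top_n]]

print(recommend(0))  # ranks the user's unseen items
```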

2.7 Customer Support
AI-powered chatbots can use both machine learning and generative AI to provide real-time customer support, answer frequently asked questions, and engage with readers. They effectively provide conversational marketing platforms to help automate customer service, provide personalised recommendations, and assist with content discovery, enhancing user experience and increasing engagement. Again, these are now in common use across online platforms, including on websites for government services,23 insurance services,24 airlines,25 healthcare products and services,26 and many others.

In summary, AI is integrated into many aspects and critical functions of the online advertising industry.
Importantly, it is integrated in many everyday tools that consumers have come to rely on to make their lives easier, more enjoyable, or simply to save them time.

21 Ibid.
22 For example see https://www.linkedin.com/pulse/look-how-ai-based-recommendation-systems-changing-e-commerce--1f/
23 For example see https://www.servicesaustralia.gov.au/meet-our-digital-assistant?context=1
24 For example see https://www.cmo.com.au/article/683015/how-nrma-arlo-koala-chatbot-won-over-customers/
25 For example see https://www.qantasnewsroom.com.au/media-releases/qantas-launches-chat-bot-concierge-to-give-customers-travel-inspiration/
26 For example see https://www.defencehealth.com.au/Support/Health-HQ/2023/November/The-rise-of-the-chatbot-in-the-health-sector

3. Regulating AI and related applications used in Advertising
As outlined in the introduction, AI, based on the definition set out in the Discussion Paper, covers a wide range of technologies and use cases. However, many AI and related technologies are already being widely used in advertising and regulated under existing laws.27

In this section, we consider the current regulatory framework, how it is being used to address AI and related applications, as well as how to approach new and emerging risks.

3.1 Application of existing technology neutral laws
As noted in the Discussion Paper, a number of laws and regulations that apply to advertising are technology neutral. That is, they were intended to capture harms that arise regardless of the technologies involved. In IAB’s view, in many cases existing laws will continue to be sufficient to address lower-level risks from AI technologies which are currently in use.

For example, subject to the issues currently being considered in the Privacy Act Review, privacy laws apply to personal information (PI) regardless of whether that PI was handled by AI in the course of an organisation using it. The Discussion Paper similarly points out that the Australian Consumer Law (ACL) applies to all products or services supplied to Australian consumers, regardless of whether they incorporate or use AI.28 Similarly, existing regulations that apply content rules to an ad will continue to apply, regardless of the technologies used in the process of creating, delivering or placing that ad.

Regulators are increasingly turning to the existing regulatory framework to regulate AI. As the Discussion Paper notes, the Federal Court’s decision in ACCC v Trivago is a good example of how the ACL was applied to algorithmic decision making, ultimately finding Trivago had breached the ACL by giving consumers the misleading impression that they were getting the best deal or cheapest rates when that was not the case. The fact that Trivago had used an algorithm to display those rates did not affect Trivago’s liability or the outcome in that case.29

Similarly, in the decision of Clearview AI Inc and Australian Information Commissioner,30 the AAT found that the sending of information by Australian servers to an offshore server constituted collection of PI in Australia under the Privacy Act, and that the automatic harvesting of images by Clearview AI’s web crawler from servers which held images of Australians and from servers located in Australia, for inclusion in its image library, constituted ‘carrying on business in Australia’.31

Another example is the AANA and Ad Standards advertising self-regulatory system, which has continued to regulate ads throughout the transition to online advertising and buying and selling automation. The AANA Codes cover all advertising including marketing material regardless of platform,32 and consistent with changing content consumption habits, consumer complaints about online ads, as well as cases heard by Ad Standards which relate to ads seen online, are also increasing.33

27 https://info.innovid.com/blog/ai-in-ad-tech-cutting-through-hype-harnessing-potential
28 Discussion Paper, Box 2.
29 Trivago misled consumers about hotel room rates | ACCC
30 Clearview AI Inc and Australian Information Commissioner [2023] AATA 1069.
31 http://www.austlii.edu.au/cgi-bin/viewdoc/au/cases/cth/AATA/2023/1069.html
32 https://adstandards.com.au/about/what-we-cover
33 https://adstandards.com.au/sites/default/files/review_of_operations_2021_final.pdf; https://adstandards.com.au/sites/default/files/adstds_review-of-operations_final_web_version.pdf

Similarly, discrimination laws also apply to the use of AI. The Human Rights Commission’s Discussion Paper on Human Rights and Technology noted that existing laws “apply to the use of AI, as they do in every other context. The challenge is that AI can cause old problems – like unlawful discrimination – to appear in new forms”.34 In recognition of this, the Human Rights Commission’s Final Report in the inquiry proposed greater guidance for government and non-government bodies in complying with discrimination laws in the context of AI,35 as well as modernising the regulatory approach for AI to ensure principles such as accountability and the rule of law apply more effectively to the use and development of AI,36 rather than an overhaul of discrimination laws.

In our view, existing laws therefore can and should continue to be applied to AI technology. Given AI technology is evolving quickly, we would caution against introducing technology-specific regulations where they are not required. Technology-specific laws risk becoming outdated quickly and creating potentially inconsistent or overlapping obligations.

We would support a gap analysis to better understand which risks arising from AI are not covered by the existing regulatory framework. Where there are gaps, consideration should be given to the most appropriate way to address the relevant risks. The focus should be on addressing high-risk use cases which are insufficiently covered by existing regulations, rather than low-risk technologies that are already regulated under existing laws.

3.2 Existing regulations that apply to advertising
As noted in the Discussion Paper, Australia’s current approach to regulating AI relies on a combination of a broad set of general regulations that are technology neutral, sector-specific regulations as well as voluntary or self-regulation initiatives.37

As the Discussion Paper points out, there are a range of general regulations that would apply to risks that arise from AI (in the same way as they apply in any other context), including privacy laws, the Australian Consumer Law, criminal and anti-fraud laws and discrimination laws.38

In addition to these, there are also a range of sector-specific laws that apply in an advertising context and regulate the content and placement of ads, regardless of whether they were created, delivered or placed using automated technologies. Examples include: the Tobacco Advertising Prohibition Act; gambling advertising rules under the Broadcasting Services Act and associated codes and online rules;39 the Interactive Gambling Act;40 the Food Standards Code;41 the Therapeutic Goods Advertising Code;42 the AHPRA guidelines for advertising a regulated health service;43 the ABAC Responsible Alcohol Marketing Code;44 the AANA Codes, including the Code of Ethics, the Children’s Advertising Code, the Food & Beverages Advertising Code, the Wagering Advertising Code and the Environmental Claims Code;45 and the FCAI Code on advertising motor vehicles.46

34 https://tech.humanrights.gov.au/sites/default/files/2019-
35 https://tech.humanrights.gov.au/downloads?_ga=2.88587163.1898538171.1691229897-520345683.1690680641; Recommendation 18.
36 https://tech.humanrights.gov.au/downloads?_ga=2.88587163.1898538171.1691229897-520345683.1690680641
37 Discussion Paper, 26.
38 Discussion Paper, 10.
39 https://www.legislation.gov.au/Details/F2018L01203
40 https://www.acma.gov.au/about-interactive-gambling-act
41 https://www.foodstandards.gov.au/industry/labelling/pages/nutrition-health-and-related-claims.aspx
42 https://www.tga.gov.au/therapeutic-goods-advertising-code
43 https://www.medicalboard.gov.au/Codes-Guidelines-Policies/Advertising-a-regulated-health-service/Guidelines-for-advertising-regulated-health-services.aspx
44 https://www.abac.org.au/about/thecode/

There are also a range of other voluntary industry initiatives, including IAB technical standards which set a consistent global set of technical standards that help promote transparency and brand safety and protect against ad fraud, and guidelines such as the Australian Digital Advertising Practices which were designed to promote trust in digital advertising.47

In summary, the potential harms that may arise from advertising, including as a result of use of AI and related technologies, are comprehensively regulated.

3.3 Critical importance of privacy law reform for use of AI
As pointed out in the Discussion Paper, rich, large and quality data sets are a fundamental input to AI.48 AI relies on data from historical outcomes to make decisions about, and predict, future outcomes. If that data is unavailable, incomplete, inaccurate or biased, that will be reflected in any outputs.

As the Privacy Act is the key piece of legislation that regulates data, where that data meets the threshold of ‘personal information’, it is a critical piece of the puzzle in ensuring that AI is used safely and responsibly. As noted in the Discussion Paper, a key risk identified by the National Science and Technology Council (NSTC) Rapid Response Information Report on generative AI49 was the validity and reliability of data used to train models for their intended purpose.50 If the Privacy Act limits the use of data even where that data is not personally identifying, this will lead to fewer data inputs being available for purposes including those outlined in this submission – measurement, analytics, research, and providing relevant ads and helpful customer recommendations to consumers.

The Privacy Act 1988 is currently under review, with the Government considering whether it remains fit for purpose – both in terms of protecting consumer privacy and in terms of ensuring the smooth functioning and development of online interactions and activities. In our view, getting this balance right will be critical to the advertising industry not only extracting the benefits from AI and related applications set out in section 2 above, but also to ensuring that the risks of AI being biased or providing false or inaccurate results are minimised.

Our position in relation to the Privacy Act Review is set out in detail in our submission to that review.51 However, we note briefly here some key issues IAB considers critical for consideration by this review.

Privacy laws should not regulate data which is not personally identifying

The Privacy Act should not regulate data that is not personally identifying, other than to protect against misuse, loss or unauthorised re-identification. Non-identifying data inputs are critical both for extracting the benefits of AI and for ensuring that biased or inaccurate results are minimised.

AI and machine learning models rely on sharing of de-identified and anonymised data

As outlined in section 2 above, advertising functions rely on machine learning technologies that require data to enable the models to make accurate predictions. The data that is ingested into these machine learning models may come from a company’s first party data, may have been verified with data partners prior to ingestion (to ensure it is accurate or that there are no fake or fraudulent accounts), and/or may have been supplemented with third party data from a data broker.

45 https://aana.com.au/self-regulation/codes-guidelines/
46 https://adstandards.com.au/issues/motor-vehicle-advertising
47 https://iabaustralia.com.au/adaps-2020/
48 Discussion Paper, 2.2.
49 https://www.chiefscientist.gov.au/GenerativeAI
50 Discussion Paper, 8.
51 https://iabaustralia.com.au/guideline/iab-submission-privacy-act-review-report-2022/

In other words, data sharing plays an important role in building the machine learning models that are used in advertising. A clear distinction should be drawn between selling personal information on the one hand, and sharing data which is anonymised or de-identified on the other. The latter should not be restricted in the same way as the former, to avoid risks that may arise with AI such as inaccuracy and bias.

Data segmentation plays an important role in creating machine learning models used in advertising

Data segmentation also plays an important role in the creation and use of machine learning models for the advertising uses outlined in section 2 of this submission, including measurement and analytics, delivering relevant advertising, promotions and customer recommendations. Data segments are a critical input to train machine learning models to recognise and group future data and data predictions. Machine learning models can then in turn take ingested data and divide it into segments, to assist with more relevant delivery and placement of ads in future.

AI and machine learning models will assist with Privacy Enhancing Technologies (PETs) and are key to balancing privacy with a functioning digital economy

Privacy enhancing technologies (PETs) have been specifically developed by industry to protect user privacy and are now commonly used. They enable use and sharing of non-identifying data, so that personally identifying data does not have to be used or shared. However, these rely on being able to ingest data into the models (for example, de-identified or anonymised data or data where a user has provided explicit consent). To ensure their use is not restricted or disincentivised, the privacy law framework should distinguish between PI and non-identifying data.
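As an illustration of the kind of technique involved, the sketch below releases only an aggregated, noise-protected segment count rather than user-level records, in the spirit of differential privacy; the parameters are illustrative rather than a calibrated production mechanism:

```python
# A toy sketch in the spirit of differential privacy: user-level data never
# leaves the function; only an aggregated count with calibrated noise is
# released. Epsilon and the data are illustrative, not production-calibrated.
import numpy as np

rng = np.random.default_rng(0)

def noisy_segment_count(user_segments, segment, epsilon=1.0):
    true_count = sum(1 for s in user_segments if s == segment)
    # A count query has sensitivity 1, so Laplace noise of scale 1/epsilon.
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return max(0, round(true_count + noise))

segments = ["auto", "travel", "auto", "finance", "auto", "travel"]
print(noisy_segment_count(segments, "auto"))  # close to 3, but protected
```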

3.4 Approach to regulating new and emerging risks
IAB supports a risk-based approach to regulation of new and emerging risks which takes into account the likelihood of harm as well as the severity of harm, and that prioritises its focus on high-risk use cases. A risk-based model forms the basis of the approach already taken in a number of countries, as highlighted in the Discussion Paper, and was agreed to as an approach by the G7 earlier this year.52

Whereas much of the existing regulatory framework applies to potentially harmful impacts on consumers from business practices (including uses of technology), risk-based approaches to regulating AI, including those in the EU and the US, are focused on principles-based risk assessment processes that can consider risks as they arise, including from the design and development of AI products, services and systems.

Given the wide range of use cases and technologies that fall under the broad umbrella of AI, and the unique issues that may arise in different sectors, we support a risk-based approach which sectoral regulators can use to clarify how existing regulations apply to uses of AI, or update existing oversight and enforcement regimes as appropriate.

We would note that, regardless of the final form of any AI regulation, given that it will likely place additional technical and regulatory burden on entities implementing AI, it will be important to distinguish between different use cases. This appears to be the approach taken in the EU, for example; the proposed EU AI Act would allow AI that is considered ‘minimal risk’ to continue without restriction, which the EU has indicated would include the vast majority of AI systems currently in use there.53

52 https://fedscoop.com/g7-nations-agree-on-need-for-risk-based-approach-to-ai-regulation/

In our view, many automated technologies used in an advertising context would also fall into this category. While it may be useful, where there is uncertainty, to have guidance from regulators in relation to how the existing regulatory framework applies in the context of AI technologies, minimal risk AI technologies should not be subject to unnecessary additional regulatory burden.

3.5 Transparency and explainability
In IAB’s view, transparency of AI systems – enabling them to be explained and understood – is critical to identifying harms and empowering users to make informed choices. It will also ensure accountability of AI providers. This will in turn promote trust and confidence in AI.

Transparency standards already form the basis of various regulatory frameworks around the world.
Transparency and explainability are also key principles of Australia’s AI Ethics Framework.54 As set out in that framework:

Achieving transparency in AI systems through responsible disclosure is important to each stakeholder group for the following reasons:

• for users, what the system is doing and why
• for creators, including those undertaking the validation and certification of AI, the systems’ processes and input data
• for those deploying and operating the system, to understand processes and input data
• for an accident investigator, if accidents occur
• for regulators in the context of investigations
• for those in the legal process, to inform evidence and decision‐making
• for the public, to build confidence in the technology

Responsible disclosures should be provided in a timely manner, and provide reasonable justifications for AI systems outcomes. This includes information that helps people understand outcomes, like key factors used in decision making.

This principle also aims to ensure people have the ability to find out when an AI system is engaging with them (regardless of the level of impact), and are able to obtain a reasonable disclosure regarding the AI system.55

These principles are a good starting point. However, as we understand it, it is not always possible to determine how an algorithm or machine learning model came to a particular result and, as AI becomes more advanced, this will only become more difficult. Transparency should therefore be incorporated early into product design and development processes, to improve this understanding, to enable better transparency once the product is in use, and to identify risks early. This should be taken into account if further transparency rules are introduced and/or mandated.
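As an illustration of post-hoc explainability, the sketch below uses permutation importance on a toy model to show which inputs most affect its outputs; the data and feature names are invented:

```python
# A toy post-hoc explainability sketch: even when a model's internals are hard
# to inspect, permutation importance shows which inputs most affect its
# outputs. Data and feature names are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))      # columns: [age_band, pages_viewed, hour]
y = (X[:, 1] > 0).astype(int)      # outcome actually driven by pages_viewed

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, imp in zip(["age_band", "pages_viewed", "hour"], result.importances_mean):
    print(f"{name}: {imp:.3f}")    # pages_viewed should dominate
```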

We would also note that, in developing any further transparency rules, conflicting priorities should also be considered. For example, as noted in the Discussion Paper, the Privacy Act Review Report has recommended that entities be required to provide information about targeting, including clear information about the use of algorithms and profiling to recommend content to individuals. As we noted in our submission to that review, while we agree in principle, this should be done in a manner that does not compromise the confidentiality of those technologies, and would depend on the level of transparency required, to whom it was required to be provided and on what basis.56 Transparency should not, for example, require organisations to reveal full details of underlying code where that is commercial-in-confidence information. In any event, information is likely to be far more valuable to non-technical audiences if it is provided in a manner that is comprehensible.

53 https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
54 Australia’s AI Ethics Principles, see https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles
55 Ibid.

Another conflicting priority worth noting is security. If transparency requirements are framed too broadly, they could give rise to security issues if safeguards are not put in place. For example, broad disclosure could make it easier for bad actors to manipulate or exploit AI models, or to create scam ads.

3.6 Identifying bias in advertising
The Discussion Paper identified algorithmic bias as one of the biggest risks or dangers of AI.57 As it points out, this was a major focus of the Australian Human Rights Commission’s Human Rights and Technology Report in 2021 (the AHRC Report),58 which considered that existing discrimination laws apply to AI, as they do in every other context, and did not recommend any legislative changes to those laws.

The AHRC Report described ‘algorithmic bias’ as circumstances ‘where AI is used to produce outputs that treat one group less favourably than another, without justification’.59 It noted that this can occur because of problems with either the data inputs to the AI tool or the AI tool itself. It also noted that while bias is not unique to AI – it can arise in all forms of decision making – AI can lead to bias presenting in new ways, and can potentially obscure or entrench the bias.60
Increased use of AI and machine learning models means bias is something that will impact all industries across the economy. The advertising industry is concerned that bias can reduce the effectiveness of advertising campaigns and damage customer relationships, as well as negatively impacting consumers. However, while the potential introduction of bias via use of AI technologies has been raised in the Discussion Paper, we think it is also important to consider the use of AI tools to test datasets and to identify and mitigate biases.61 These tools are becoming increasingly popular.62 For example, use of tools such as What-If, AI Fairness 360, Fairlearn, Local Interpretable Model-Agnostic Explanations (LIME) and FairML is increasing in the advertising industry.63
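As an illustration of automated bias testing with one of the tools named above, the sketch below uses Fairlearn's demographic parity metric on synthetic data to measure whether a model's decisions favour one group over another:

```python
# A toy sketch of automated bias testing with Fairlearn, one of the tools
# named above: measure whether a model's decisions (e.g. whether to show an
# ad) select one group more often than another. All data here is synthetic.
import numpy as np
from fairlearn.metrics import demographic_parity_difference

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)                # actual outcomes
y_pred = rng.integers(0, 2, size=500)                # model decisions
group = rng.choice(["group_a", "group_b"], size=500)

# 0.0 means both groups are selected at the same rate; larger values mean
# the model favours one group over the other.
dpd = demographic_parity_difference(y_true, y_pred, sensitive_features=group)
print(f"demographic parity difference: {dpd:.3f}")
```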

As noted above in section 3.1, the AHRC proposed greater guidance for government and non-government bodies in complying with discrimination laws in the context of AI,64 as well as modernising the regulatory approach for AI to ensure principles such as accountability and the rule of law effectively apply to its use and development.65 If there are areas where the application of the law to AI is unclear, existing laws should be clarified in preference to new technology-specific laws being introduced.

56 IAB submission, Privacy Act Review Report, 2023, 21.
57 Discussion Paper, 8.
58 Ibid.
59 https://humanrights.gov.au/our-work/technology-and-human-rights/publications/final-report-human-rights-and-technology, 106.
60 Ibid, 105.
61 https://github.com/Trusted-AI/AIF360
62 How bias in AI can damage marketing data and what you can do about it (martech.org)
63 Ibid.
64 https://tech.humanrights.gov.au/downloads?_ga=2.88587163.1898538171.1691229897-520345683.1690680641; Recommendation 18.
65 https://tech.humanrights.gov.au/downloads?_ga=2.88587163.1898538171.1691229897-520345683.1690680641

4. Conclusion
IAB Australia thanks the Department for the opportunity to make this submission.

We look forward to working with the Government to ensure the regulatory framework supports safe and responsible AI, and appropriately mitigates any potential risks of AI.
