Safe and Responsible
AI in Australia
August 2023
Contents
1. Overview
2. Key recommendations
3. Artificial Intelligence in Australia
4. A principled response to AI
4.1 Don’t focus solely on the technology
4.2 Supporting Australian workers
5. Existing laws already apply to AI
6. Addressing the gaps
6.1 Guidance on existing laws must be provided urgently
6.2 Regulatory sandboxes
6.3 Focusing on the opportunity
6.4 Government must manage trade-offs appropriately
Safe and Responsible AI in Australia 1
1. Overview
Australia faces a critical test in its approach to artificial intelligence (AI). AI is already widely used across Australia in ways which improve the lives of all Australians.
The continued use of these technologies will be central to Australia’s future prosperity, with generative AI alone already forecast to contribute up to $115 billion to the economy by 2030.1
Getting the approach right is critical for Australia. AI can deliver important advances across industries which will lift Australia’s productivity and improve rather than replace jobs. It has the potential to save lives through improved medical and health innovations. There are enormous positive opportunities that we should seek to enable and to capture the benefits.
That is not to say that we are ignorant of the potential risks. But we should not lose sight of the positives, by focusing only on the areas where there are concerns. It will be critical to strike the right balance between adopting the right guardrails while encouraging development, adoption, and use.
Australia’s existing sets of laws and regulations apply to AI, just as they do in any other circumstance. Australia has a mature legislative and regulatory environment that provides protections, guardrails, and avenues of recourse.
The problem is not that there are no laws. Governments, businesses and the community need a better understanding of how the existing set of laws apply in the context of AI (and its rapid evolution). Before adding new regulation or legislation, government must work with industry and academic experts to identify where there are material gaps in current approaches.
Clarity and stability of regulatory settings are a key consideration for businesses. Already, businesses are deciding against developing, investing in, or offering new products and services in jurisdictions where the regulatory environment is unclear. The government must not make it unnecessarily difficult for Australians to access the services and information they want, rely on, and need.
The BCA believes now is not the time to deliberately put unnecessary barriers in the way of Australia’s ability to seamlessly become a top five digital economy.
Instead, government needs to draw on the deep wells of expertise businesses and researchers have developed in AI to create, update, and issue guidance on how regulators will be approaching AI in their respective sectors.
That is why our recommendation – that regulators work closely with experts from business and academia to understand how AI is being used in specific sectors, and develop clear guidance on how existing laws apply – must be taken forward urgently.
This holds true not only for the Commonwealth: state and territory governments also hold many of the levers and have a pivotal role to play here. All governments need to work together to ensure a consistent regulatory environment across Australia.
If the Commonwealth does take forward any new regulatory responses, these must be risk-based, proportionate, and focused on outcomes, not solely the technology. Australia needs globally interoperable, durable, and flexible regulatory settings that set appropriate limits while encouraging Australians to leverage the transformative and productivity-enhancing benefits of AI.
Government must avoid adding more drag to the economy. Business investment as a share of GDP is near 30-year lows, capital is leaving Australia in net terms for the longest period since World War II, and more Australian direct investment is going to the United States than the US is investing here.
1 https://news.microsoft.com/en-au/features/generative-ai-could-contribute-115-billion-annually-to-australias-economy-by-2030/
To lift productivity, improve our standard of living, and grow the number of high-paying, secure jobs, Australia must reverse these trends, including by maximising the opportunities presented by new technologies like AI. This will mean focusing not just on the risks, but also – critically – on the opportunities.
2. Key recommendations
The Business Council recommends:
1. Government prioritise bringing regulators, industry and experts from academia together to build
understanding of AI technologies and develop guidance on how existing regulations and laws apply to
artificial intelligence.
a. This could form part of the APS Reform work being led by the Finance Minister.
2. If any future regulation or legislation is contemplated, any first steps should focus on encouraging entities to disclose the governance and principles in place for their own use of AI.
3. Any new regulations must be risk-based, and:
a. adhere to well-established regulatory principles, including demonstrating a clear case for action, proportionality, and providing the greatest net benefit to society,
b. focus on specific harms or desired outcomes, rather than the technology,
c. focus on applications within each sector, rather than broadly at an industry level,
d. be sufficiently flexible to adapt as technologies evolve,
e. promote regional harmonisation and consistency within the Asia-Pacific region to ensure Australian businesses are not disadvantaged by unnecessarily onerous regulations, and
f. not conflict with other legislative or regulatory requirements imposed by government, such as data minimisation, cyber security resilience, and privacy.
4. Government work with businesses on positive measures to encourage pro-innovation, safe, and
responsible development and use of AI, such as through regulatory sandboxes and incorporation of AI-
specific articles in Economic Cooperation Agreements and Free Trade Agreements.
5. Government work with businesses to ensure Australians are ready to take advantage of current and
emerging technologies like AI (including generative AI).
3. Artificial Intelligence in Australia
Artificial intelligence is already widely used across Australia. This is not a new phenomenon – while recent advances in generative AI have gained attention, organisations across Australia have been using AI to deliver improved services or more productive workplaces for many years.
The use of AI is not just isolated to a small number of ‘tech’ businesses. AI and automation have been used in automated vehicles in Australia’s resources sector, to deliver more efficient manufacturing, to detect and protect against scams, to prepare media reporting relevant to local communities, and to develop new medicines and medical devices, among myriad other examples.
Australian public and private organisations will need to continue to use new and emerging technologies like AI and generative AI to lift productivity and remain competitive. It has the potential to deliver products and services that will improve the lives of Australians and make the country more prosperous.
For example, research recently released by Microsoft has shown the potential of just one form of AI, highlighting the potential $115 billion contribution generative AI could make to Australia’s economy by 2030.2
Our policy and regulatory frameworks need to support greater business adoption: encouraging businesses to access the best resources available globally and the right skills and talent to support the adoption of new technologies. Adopting these new kinds of technologies will support businesses in all sectors continuing to grow and create new, high paying jobs. But if government puts arbitrary and poorly thought-out barriers in the way, it hinders Australia’s already lacklustre productivity growth.
Indeed, as the BCA has pointed out repeatedly, where regulatory frameworks are inconsistent, confusing, or poorly designed it discourages businesses from making these investments and disadvantages Australian businesses, consumers, and the country.
The Discussion Paper suggests that regulatory responses don’t necessarily have a deleterious effect on adoption or innovation. It is true that well-designed laws can create certainty and trust.
But uncertain or poor regulatory environments in other jurisdictions, such as the EU, are already seeing products and services not being launched, investment decline, and innovation stymied.
The EU experience
The approach outlined in the Discussion Paper draws heavily on the European Union’s AI Act.
It would be a mistake for Australia to cut and paste the approach taken by the EU without careful
consideration. The new Act is creating duplication in the EU, and inconsistencies with existing EU laws
across a range of sectors.
The EU Act is also yet to be finalised – despite the Act’s passage through the EU’s Parliament, the final content and wording are still to be negotiated across a range of European states and institutions, including regulators.
The EU’s approach is grounded in specifically European cultural mores and priorities, distinct from
Australia’s unique culture and heritage.
Moreover, the EU’s approach to regulation of new technologies – including AI – has come at a steep,
and likely disproportionate, cost. The introduction of the General Data Protection Regulation (GDPR)
cut into innovation, competition, and jobs – disproportionately affecting smaller firms – while having
only marginal or short-lived benefits to citizens.3
The new AI Act is similarly expected to reduce the competitiveness of the EU in AI, drive investment offshore, and cut into innovation in the EU.4 Indeed, major businesses have already decided against launching new products and services in the EU because of regulatory uncertainty.
Australia should be wary of falling into this same trap. While there are some positive lessons that can
be drawn from the EU’s approach – such as the use of a risk-based approach – there are major pitfalls
that should be avoided, such as the focus on specific technologies over outcomes and
inconsistencies with existing requirements set by government.
Fortunately, Australia’s existing regulatory frameworks remain robust and well-placed to handle many of the challenges posed by AI.
2 https://news.microsoft.com/en-au/features/generative-ai-could-contribute-115-billion-annually-to-australias-economy-by-2030/
3 See, for example, Garrett Johnson’s December 2022 review of the economic literature on the GDPR: Economic Research on Privacy Regulation: Lessons from the GDPR and Beyond, https://www.nber.org/system/files/working_papers/w30705/w30705.pdf.
4 https://www.appliedai.de/assets/files/AI-Act-Impact-Survey_Slides_Dec12.2022.pdf
The problem – for consumers, businesses, regulators, and policy-makers – is not that there are no laws. The biggest gaps are not in the existing (and large) stock of legislation and regulation. Existing laws apply to AI. The gap that needs to be filled is an explanation of how the existing set of laws apply in the context of AI.
4. A principled response to AI
If government is to regulate the use or development of AI, it will be critical to ensure the principles behind any intervention are well designed and articulated to create the necessary regulatory certainty for businesses to flourish. Regulation should differentiate the context, control, and uses of the technology and assign guardrails accordingly, and be based around any new harms not covered by the existing regulatory environment.
Any new or revised regulations should adhere to the well-established best practice regulatory principles. This means they should have a clear case for action to address a problem and be proportionate and provide the greatest net benefit to society. This will require having a clear understanding of the problems being addressed and demonstration that existing regulatory and legislative powers are insufficient.
These principles have been well articulated by the government’s Office of Impact Analysis. The Office has highlighted that new policies should:
- clearly identify and define the problem to be solved
- clearly identify a legitimate reason for government action
- identify a range of genuine policy options
- identify the net benefits of each option
- explain the purpose and objectives of consultation
- indicate the preferred option
- discuss what success looks like and how it will be achieved.5
If new laws or regulations are required, it is unlikely to be effective or efficient to have a single ‘AI Act’. The technologies and applications are diverse and different sectors have different motivations for using AI. Attempts to manage these with a single piece of legislation are unlikely to be successful and will only create more regulatory overlap resulting in conflicting, inconsistent, and confusing outcomes.
They are likely to be overly broad and capture far more than is intended; as noted above, businesses are already using AI in a wide range of ways. For example, while we support the government taking a risk-based approach, the proposed risk management approach set out in the Discussion Paper would require substantial revision.
The Discussion Paper’s draft risk framework places, for example, automated vehicles under the high-risk category, despite these vehicles operating safely in the resources and agriculture sectors and on non-public roads for many years. Moreover, the requirement to have human intervention for self-driving cars appears self-defeating if the proposed control is, in effect, to require a human driver controlling the vehicle. Organisations that currently use aspects of autonomous vehicles are generally highly regulated from a safety perspective and, often, the drive behind the use of this technology is to improve safety for workers by removing them from hazardous environments. Greater nuance is required.
Similarly, the link between the proposed controls and the harms in the draft risk framework’s ‘low risk’ category is not clear. It is, for example, unclear how or why people playing chess against a computer would require training or a general explanation (which in itself may create competition issues for game developers), or what sort of AI-specific harm would be prevented by training users about spam filters, recommendation engines, or the automation of business expenses.
5 https://oia.pmc.gov.au/resources/guidance-impact-analysis/7-impact-analysis-questions
Fundamentally, government must work with industry to establish what an acceptable risk appetite is for Australia across different sectors. Different industries will have different risk tolerances and maturity levels. Any framework must also be flexible enough to accommodate new innovations and changes as they arise.
Government must take considered responses and, where possible, avoid knee-jerk reactions. The sudden decision by most jurisdictions (with notable exceptions, such as South Australia) to ban the use of ChatGPT in schools is one such example, where action was taken without full consideration of the potential benefits or of alternative ways of managing the issues. Such reactive responses are unlikely to achieve positive outcomes over the long term and are much more likely to have unintended consequences and stifle progress.
Instead, they are more likely to create inconsistency and incoherence in an already tangled and under-explained set of legislative requirements imposed on organisations seeking to innovate or use AI in Australia. It will be critical that government ensures any new responses are aligned and integrated with current legal frameworks.
4.1 Don’t focus solely on the technology
Focusing responses on the specific technologies of AI will entrench a legal system designed around technologies as they exist today. This is dangerous and will trap Australia in a regulatory posture unable to keep pace as new applications and uses of AI rapidly evolve.
Any new approaches must be focused on addressing any possible harms, rather than seeking to regulate specific technologies. If a harm is so bad that it requires legislative opprobrium, then the harm should be the focus, not just the AI version of it.
As an example, the paper suggests requiring notices be provided where automation or AI is used in a way that materially affects users. We agree that accountability is an important part of a well-functioning economy. But before taking this step, government needs to be clear how it expects this will benefit Australian citizens.
As the use of AI and automated systems becomes increasingly widespread across the economy, the most likely outcome will be that users disregard the notifications given the sheer volume of notifications they will receive, much as cookie notices placed on websites because of GDPR are now seen as nuisances. Driving this kind of notification ‘fatigue’ will just lead to users ultimately ignoring the notification to get to the service they want.
Further, government needs to be clear about how requiring notification that AI was involved in a business process will improve a person’s capacity to seek redress or review. If a decision-making process is flawed, it does not matter whether this is due to automation or a human decision-maker.
While individual businesses can provide (and are providing) transparency to users about the AI models they use, a ‘one size fits all’ model of transparency will not be a panacea. It may be helpful to consider ways to encourage businesses to share the governance and principles for their use of AI, similar to the model for modern slavery reporting.
4.2 Supporting Australian workers
Government will also no doubt be urged to put regulations in place to ‘protect’ jobs by restricting the use of AI in certain areas. Government should resist these calls. Rather than driving unemployment, AI will instead change the nature of the individual tasks undertaken by workers.6 Indeed, it may augment some jobs, enhancing innovation and creativity. Microsoft’s research found that generative AI could improve productivity and add between $45-$115 billion to the Australian economy by 2030.7
6 This has been well documented, including by Felten et al (https://arxiv.org/ftp/arxiv/papers/2303/2303.01157.pdf) and in research conducted by AlphaBeta for the BCA (see: https://www.workingforthefuture.com.au/how_jobs_will_change).
7 https://news.microsoft.com/en-au/features/generative-ai-could-contribute-115-billion-annually-to-australias-economy-by-2030/
This is not a feature unique to AI: jobs continuously change, with working Australians gradually adapting to change in their jobs as a matter of course. What we know is that jobs that experience more task change to adapt to innovation have a lower incidence of job losses.8
Government should be working with businesses to ensure that Australians are ready to take advantage of the new opportunities that technologies like AI bring.
The best way to protect jobs will be to ensure workers are ready for the changes we know are coming.
5. Existing laws already apply to AI
Concerns have been raised about a wide range of issues, including bias and misinformation, copyright and ownership, safety, competition, and privacy, among many others.
In Australia, the development, use, and outcomes of AI are regulated by technology-neutral laws of general application. This is well-traversed and understood, and this submission supports the findings of the other reports which have set this out in greater detail.9 This remains appropriate: Australia has a mature legislative and regulatory environment that provides protections, guardrails, and avenues of recourse.
This technology-neutral approach ensures regulatory focus remains – appropriately – on the harms, not the technology.
What about Generative AI?
There has been substantial commentary about ‘different’ or novel applications of AI – like generative
AI (in simple terms, a form of machine learning that allows computers to generate new content, such
as text, images or videos).
But even in this context, existing laws remain capable of managing ‘new’ problems. Concerns raised about generative AI typically fall into three ‘buckets’:
- The training data (such as bias or inappropriately sourced training data)
- User queries (such as users deliberately trying to generate offensive or ‘bad’ content)
- The outputs (such as sharing of sensitive information or ‘hallucinations’ / factually incorrect responses)
Each of these challenges is addressed under existing laws.
Problems within the training data are already covered by a raft of laws, including the Privacy Act (itself undergoing reform) and anti-discrimination laws (where it relates to a protected attribute), among many others.
Issues arising from user queries – particularly where efforts are made to deliberately generate or propagate illegal material that may cause harm – are similarly covered by obligations under laws covering work health and safety, abhorrent violent material, online safety, the criminal code, and many others.
Finally, the ‘outputs’ or generated content must already comply with the Privacy Act, data security obligations, and Consumer Law obligations, among others.
8 AlphaBeta, https://www.workingforthefuture.com.au/how_jobs_will_change.
9 Such as the Human Technology Institute’s report on The State of AI Governance in Australia.
It is also appropriate for laws not to try to anticipate and cover every single problem that AI may give rise to.
Some issues are better managed by individuals and individual organisations, such as users inadvertently sharing confidential information with generative AI models.
Similarly, government can work with businesses and other relevant organisations where solutions are being developed. This does not need to be through regulatory measures. Instead, by being part of conversations led by industry, government can play an important role in ensuring industry-led solutions meet community expectations. For example, content credentials standards can lift transparency and trust by ensuring creator attribution for digital content, helping users understand the origins and edits of the content they see, and providing an automatic indication of AI-generated content. Government has a critical role to play in facilitating and recognising industry-led AI governance initiatives like these.
6. Addressing the gaps
Though existing laws already cover AI systems, including generative AI, there remains a substantial gap in Australia’s regulatory environment: the absence of timely, comprehensive guidance from Australian regulators about how these laws apply to AI.
6.1 Guidance on existing laws must be provided urgently
Filling this gap will be critical to giving all organisations and consumers confidence and trust that AI is providing positive outcomes for Australia.
Some regulators have already indicated they will consider how AI applies in their specific contexts. But in large part, guidance remains absent.
Providing this guidance cannot be done by regulators in isolation. Expertise in AI (and most new technologies) remains low in regulatory bodies and policy making agencies.
Instead, we recommend regulatory agencies work closely with industry to build understanding of the underlying technologies and then to develop guidance on how existing regulations and laws apply. This could form one part of the wider APS Reform effort being led by the Finance Minister. There is precedent for this, with the Discussion Paper highlighting the example of the Australian Actuaries Institute partnering with the Australian Human Rights Commission to develop guidance specific to AI and insurance pricing and underwriting.
Where sectors have mature regulators, this should take the form of ‘tiger team’ taskforces composed of experts from regulators, industry and academia, who are tasked to work in a time-bound fashion to develop regulatory guidance.
A potential candidate to pilot this project is the Therapeutic Goods Administration (TGA). As the
Discussion Paper notes, the TGA already regulates and provides guidance on software-based medical
devices. However, though the paper suggests this relates to AI, the TGA explicitly carved artificial
intelligence out of this guidance. There is a clear opportunity to deliver practical guidance for the use
of AI with medical devices.
There are few more critical areas than medical devices, where individual lives may be at stake. For this
reason, getting the regulatory guidance right will be vital. Drawing in industry brings to bear the experts who understand how products are developed, who work directly with consumers and who, as most devices approved for use in Australia will already have been approved by international regulators, bring a deep understanding of international regulatory trends.
This includes the differing approaches taken in major markets such as the EU and US. The US is
already leaning into modernisation of its regulatory approvals systems for the use of AI in medical
devices. Currently, the algorithms underpinning new devices are effectively ‘locked’ once approval is given, requiring re-assessment when there is a change (hence undermining the entire purpose of this technology). The US FDA is now consulting on a draft framework that will allow manufacturers greater flexibility to predict changes and implement future modifications without requiring additional submissions where they are consistent with proposed Predetermined Change Control Plans.
The TGA is working as part of the International Medical Device Regulators Forum (IMDRF) to develop globally harmonised guidance on AI for medical devices. While we support efforts to create global consistency, the IMDRF process is being outstripped by the pace of both technological and regulatory change. By establishing a taskforce, the TGA could be more agile in responding to developments while remaining part of global efforts.
Guidance from Australian regulators is needed urgently. Without certainty in Australia, businesses across all sectors may delay investing in new technologies and innovations, placing them at a disadvantage against overseas competitors operating in environments with greater regulatory clarity.
6.2 Regulatory sandboxes
Further to this, to help promote research and development of AI technologies in Australia, governments should foster a policy environment that supports an agile transition from the research and development stage to the deployment and operation stage for trustworthy AI systems.
To this end, government must consider using experimentation to provide a controlled environment in which AI systems can be tested, and scaled-up, as appropriate.
This will mean creating and supporting regulatory sandbox environments, providing an environment where developers and firms can test innovative products or services.
As the OECD has highlighted, this can promote increased investment in start-ups, faster market entry and speed-to-market, and better policy-making and regulatory processes.10
6.3 Focusing on the opportunity
The approach taken in the Discussion Paper (and indeed in much of the public commentary) has been around the risks of AI. But government must not lose sight of the benefits already flowing from AI – not just in the deployment and productivity enhancements that are evident across Australia, but also as a class of export.
Many Australian businesses are exporting their AI technologies overseas, and it is a growing contributor to
Australia's total exports.
To support and enable this, government should look at how it can prioritise updating Economic Cooperation
Agreements and Free Trade Agreements with Digital Economy Agreements, with specific articles on AI – similar to the measures put in place with the agreement struck with Singapore.
6.4 Government must manage trade-offs appropriately
Any changes also need to be harmonised with other significant legislative reform consultations, such as the 2023-2030 Australian Cyber Security Strategy and the Privacy Act Review.
For example, government may decide to attempt to regulate automated decision-making (ADM) systems in response to the Robodebt Royal Commission, including as contemplated in the Privacy Act Review. But as we have previously stated,11 it would be unhelpful to have privacy laws front-run whole-of-government approaches, particularly given the broad definition of ADM could encompass any computer-powered service or personalisation.
10 Regulatory sandboxes in artificial intelligence, https://www.oecd-ilibrary.org/science-and-technology/regulatory-sandboxes-in-artificial-intelligence_8f80a0e6-en
11 https://www.bca.com.au/report_from_the_review_of_the_privacy_act_1988
Similarly, there is a clear gap between existing and proposed laws and government’s expectations for businesses to manage risks.
For example, while the Discussion Paper suggests businesses may need to manage for unwanted bias in an AI based system, this will be in tension with strengthening privacy outcomes for Australians. These two outcomes need to be balanced.
Businesses are already restricted from processing some of the data required to undertake the appropriate testing (as it relates to sensitive characteristics under the Privacy Act) and may face potentially stricter restrictions under several of the 116 proposals to reform the Privacy Act. The Business Council supports appropriate data protection and privacy laws and considers these to be one of the pillars of responsible AI regulation. However, this brings trade-offs, as it may also mean there is a lack of reliable, representative datasets for many characteristics (such as political opinion, disability, or stigmatised medical conditions), which could make it difficult for AI developers or providers to demonstrate a lack of discrimination.
Moreover, even if these datasets were available and businesses could hold them, doing so would run contrary to other expectations of government, including that businesses minimise the data they hold about Australians. The risks and costs of effectively compelling organisations to hold this data, even with state-of-the-art protections, are substantial – as we have pointed out repeatedly, including as part of the Privacy Act Review and the development of the coming Cyber Security Strategy. Data breaches affect even the most security-aware organisations. If there are many places where sensitive data is held, it is much more likely to fall into the hands of bad actors.
BUSINESS COUNCIL OF AUSTRALIA
GPO Box 1472, Melbourne 3001 T 03 8664 2664 F 03 8664 2666 www.bca.com.au
© Copyright August 2023 Business Council of Australia ABN 75 008 483 216
All rights reserved. No part of this publication may be reproduced or used in any way without acknowledgement to the Business Council of Australia.
The Business Council of Australia has taken reasonable care in publishing the information contained in this publication but does not guarantee that the information is complete, accurate or current. In particular, the BCA is not responsible for the accuracy of information that has been provided by other parties. The information in this publication is not intended to be used as the basis for making any investment decision and must not be relied upon as investment advice. To the maximum extent permitted by law, the BCA disclaims all liability (including liability in negligence) to any person arising out of use or reliance on the information contained in this publication including for loss or damage which you or anyone else might suffer as a result of that use or reliance.