Safe and Responsible AI in Australia
AMAZON SUBMISSION
Introduction
Amazon is pleased to comment on the Safe and Responsible AI in Australia Discussion Paper.
Artificial intelligence (AI) and machine learning (ML) are potentially the most transformational technologies of our time and, at Amazon, we believe it is still early days for the technology. Recent advances in generative AI have provided the world with a glimpse of the new opportunities AI will create, but the full potential of the technology has yet to be unlocked.
Amazon’s perspectives on AI are informed by our dual role as a developer of AI technology and a deployer of AI tools and services. Our focus on AI and ML spans over 25 years: Amazon Web Services (AWS) offers the broadest and deepest set of AI and ML services for cloud customers, empowering developers to build, train, and deploy their own ML models or easily incorporate pre-trained AI functionality. Likewise, ML drives many of the capabilities customers use when they interact with Amazon. For example, our e-commerce recommendations engine is driven by ML; the paths that optimize robotic picking routes in our fulfillment centers are driven by ML; and our supply chain, forecasting, and capacity planning are informed by ML. Billions of times each week, Alexa, powered by more than 30 different ML systems, helps customers manage smart homes, shop, get information and entertainment, and more.
At Amazon, we believe the design, development, and deployment of AI must respect the rule of law, human rights, and values of equity, privacy, and fairness. We are committed to developing fair and accurate AI services and providing customers with the tools and guidance needed to build applications responsibly. We recognise that safe and responsible AI is the shared responsibility of all organisations that develop and deploy AI systems. We’re not developing technology for technology’s sake — every move we make in this vein is to improve the experience for our customers and our partners, with the conviction that success and scale bring broad responsibility.
As AI is increasingly used to automate decisions that have significant impacts on people’s lives, health, and safety, we recognise that government has an important role to play in promoting innovation and safeguarding the public. To that end, we support efforts to put in place effective risk-based regulatory frameworks and guardrails for AI and ML that protect human rights while also allowing for continued innovation and practical application of the technology.
As the Discussion Paper acknowledges, AI is already delivering significant benefits across the Australian economy and society more broadly. It is one of the most transformational technologies of our time and provides huge opportunities to be a force for good and drive economic growth. AI allows us to make sense of our world as never before, and to build products and services that address some of our most challenging problems, like climate change and responding to humanitarian disasters. AI is also helping industries innovate and overcome more commonplace challenges. In Australia, Transport for NSW is using predictive modelling of patronage numbers across the entire transport network, enabling the agency to better plan workforce and asset utilisation and improve customer satisfaction. Local healthcare organisations including CSIRO, Melbourne Genomics Health Alliance, and Healthdirect are benefitting from AI and ML, accelerating research for therapeutic development and making informed decisions that lead to better patient outcomes.
But unlocking AI’s full potential will require building greater confidence among consumers and businesses. That means earning and maintaining public trust in AI systems.
Understanding the need for public trust, Amazon works closely with policymakers around the world as they assess whether existing regulations remain fit-for-purpose in an AI era.
An important baseline for any regulation must be to differentiate between high-risk AI applications and those that pose low-to-no risk. The great majority of AI applications fall in the latter category, and their widespread adoption provides opportunities for immense productivity gains and, ultimately, improvements in human well-being. To earn public confidence in the beneficial impacts of AI, businesses must demonstrate they can effectively mitigate the potential risks of high-risk AI. The public should be confident that these sorts of high-risk systems are safe, fair, appropriately transparent, privacy protective, and subject to appropriate oversight.
Our response to the Discussion Paper is grounded in the following recommended core objectives, designed to bolster Australia’s standing as a leader in responsible AI while promoting AI as an economic accelerant. Through this process, we believe the Australian Government can:
• Enhance Australia’s leadership in AI and promote innovation. AI has enormous potential for society and the economy, and it will be a key tool in tackling some of humanity’s most pressing challenges. Government has the opportunity to promote Australia’s leadership in the adoption of this technology, with a strong foundation built on responsibility and safety.
• Future proof through flexibility. AI is a general-purpose technology that can be put to an extraordinarily broad range of uses. Government should focus on ensuring that existing principles-based, technology-neutral laws and sector-specific regulations remain fit-for-purpose in an era when AI will increasingly be used in high-impact ways.
• Require the adoption of risk-based AI governance practices. While sector-specific regulatory guidance will remain the most effective policy lever for mitigating risk, government should also encourage the adoption of risk-based AI governance best practices more generally, including the adoption of impact assessments for high-risk use cases.
• Establish technical grounding. Regulations should be grounded in evidence and have a sound technical basis, and appropriately reflect the risks as well as the benefits of different technologies.
• Prioritise global engagement and consistent international standards. Because many of the opportunities and challenges related to AI are global in nature, it is vital for Australia to focus its efforts internationally to align around interoperable policy solutions. As part of these engagements, Australia should seek out opportunities to contribute to the development of global technical standards, rather than creating standalone domestic standards.
• Keep it simple. Continue providing clarity and support to businesses on the fundamentals of strong AI governance.
We welcome ongoing engagement with Government as it continues to navigate this complex policy area. For further enquiries about this submission, please contact Min Livanidis (mxlivan@amazon.com).
Best regards,
Roger Somerville (somroger@amazon.com)
Head of Public Policy, Australia and New Zealand
Amazon Web Services
Contents
Introduction
Australia’s AI opportunity
Safe and responsible AI in Australia
    What is AI?
    Defining responsible AI
    Roles and responsibilities
    Leverage existing regulatory frameworks
    Vertical vs horizontal approaches
    Risk-based AI governance
    Safe and responsible by design
    Oversight and review
    Taking a people-centric approach
    Embrace and influence international standards and frameworks
Conclusion
Australia’s AI opportunity
We are living in a unique moment of AI innovation. Like other transformative technologies of recent memory, such as the personal computer, the internet, and the smartphone, AI and ML will change our lives in ways that are hard to anticipate, and perhaps to appreciate, until we have time with these technologies under our collective belts. AI and ML are potentially the most transformational technologies of our time, which is why, for more than 25 years, Amazon has invested heavily in the development of AI and ML, infusing these capabilities into nearly every business unit.
The Productivity Commission describes AI technologies as “vital enablers of productivity”.
Advances in model architectures have given rise to large language models and multi-modal models that can perform a wide range of “generative” tasks across multiple domains. An MIT study on the impacts of AI-augmented research and development showed a potential doubling of productivity growth, while another showed a 37 per cent increase in productivity for mid-level professional writing tasks. Similarly, a study of developers using Amazon CodeWhisperer, an AI coding companion with built-in security scanning for finding and suggesting remediations for hard-to-detect vulnerabilities, showed that tasks could be completed 57 per cent faster and were 27 per cent more likely to be completed successfully. These are just some early indicators of AI’s potential.
Despite this promise, few Australian businesses are leaning into AI technologies. The Australian Bureau of Statistics reports that less than 2 per cent of businesses are using AI technologies, and less than 5 per cent are engaging in data analytics. Further, businesses using AI tend to be concentrated in the Information Media and Technology or Professional Services sectors. It’s not only small and medium-sized businesses struggling with AI uptake. Adoption of AI is low even among larger businesses (200+ employees), at just 9.5 per cent. Compared to other countries in the OECD, use of AI technologies by Australian businesses is the fourth lowest, greater than only Latvia, Slovenia, and Hungary.
If Australian businesses were to simply catch up to their OECD peers in terms of digital innovation, including AI capabilities, a recent AlphaBeta study estimates that this could produce over AU$315B in gross economic value over the coming decade. This study, however, stops short of measuring the potential impact should Australian businesses exceed this mark. Realising AI’s potential will require a concerted effort to develop skills, research, entrepreneurship, management capability, and innovation, complemented and supported by the Government’s approach to regulation. It seems impossible to think that any progress will be made in addressing Australia’s grand challenges – transitioning to net-zero, an escalating health and ageing imperative, and turning around Australia’s productivity performance – without embracing the technology.
Importantly, just as Australia is looking to AI as a means to spur innovation and drive productivity, so too are countries across the globe. Over the long term, as much as 80 per cent of the difference in incomes between countries can be explained by the differences in the rate of technology adoption.
For these reasons, business leaders and members of the public alike are excited by the potential of the technology. But generative AI has also prompted concerns about the potential for misuse, and renewed calls for appropriate safeguards. As AI is increasingly used to automate decisions that have significant impacts on people’s lives, health, and safety, we recognise that government has an important role to play in promoting innovation and safeguarding the public. Without policy and regulatory settings that increase confidence and trust in AI technologies, there is a risk that
Australia’s competitiveness will suffer. To do this effectively, these policies should be risk-based, grounded in science, technically feasible, and based on a common understanding of AI, its uses and benefits, and its risks.
Safe and responsible AI in Australia
Although public interest in AI has accelerated rapidly over the last few months, today’s advances are built on research that has been underway for several decades. From the 1940s, when Alan
Turing first described the concept of a computer that could learn from experience, AI has been a staple of everything from mathematical theory, to public policy discussions, to science fiction. Its potential is the subject of continuous artistic, philosophical, and scientific exploration. Now that
AI is becoming an increasingly sophisticated, ubiquitous reality, it makes sense that governments, society, and the private sector are acutely aware of its potential, and focused on realising its benefits and managing its risks. It’s equally essential that we do so on the basis of a common definition and technical grounding.
What is AI?
As the Discussion Paper acknowledges, there is no single agreed definition of AI, and the term encompasses a wide variety of technologies. Coming to a common definition of AI, alongside the alignment of standards and risk frameworks, is a key recommendation of the Forum for Cooperation on Artificial Intelligence, in which Australia is a participant. As a first step, the Australian Government should ensure that policies are grounded in a common definition of AI that will facilitate international interoperability. As articulated by the Brookings Institution, international cooperation and alignment surrounding AI will have a multitude of benefits, including by enhancing responsible AI development and building trust.
A technically grounded, carefully tailored definition of AI that focuses on the unique aspects of the technology, rather than an overly broad definition that could capture traditional software programs, is necessary for effective regulation and broader corporate governance. Definitions of AI used for regulatory purposes should be general enough to capture the appropriate suite of technologies, and specific enough that they don’t inadvertently capture a much broader set of software applications and reduce the effectiveness of that regulation. The definition should also allow for a risk-based approach to regulatory and governance practice, as outlined in later sections. For example, a regulatory definition should focus on (1) machine learning, the technology that generally raises transparency, bias, and explainability considerations; and (2) the performance of tasks normally associated with human intelligence or perception.
Recommendation 1: Government adopt a definition of AI that focuses on the technology’s unique attributes and allows regulation to target the primary issues of concern around the use of AI technologies.
Defining responsible AI
What constitutes responsible AI is continually evolving. At Amazon, we believe the design, development, and deployment of AI must respect the rule of law, human rights, and values of equity, privacy, and fairness. Our AI services account for responsible AI across six key dimensions; while these are our key dimensions today, we will continue to iterate as the science and engineering of responsible AI matures:
• Fairness and bias. How a system impacts different subpopulations of users (e.g., by gender, ethnicity);
• Explainability. Mechanisms to understand and evaluate the outputs of an AI system;
• Privacy and security. Data used in accordance with privacy considerations and protected from theft and exposure;
• Robustness. Mechanisms to ensure an AI system operates reliably;
• Governance. Processes to define, implement, and enforce responsible AI practices within an organisation; and
• Transparency. Communicating information about an AI system so stakeholders can make informed choices about their use of the system.
The way Amazon considers responsible AI is similar to Australia’s AI Ethics Principles. We encourage government to enliven these principles with a clear articulation of their applicability in a regulatory and corporate governance context. As noted elsewhere in our paper, we expect that the reduction of regulatory uncertainty and increased trust will support the growing adoption of AI technologies.
Recommendation 2: At Amazon, we are committed to developing and using AI/ML services responsibly, and we believe the Australian Government should require others to do the same. Government could enliven its AI Ethics Principles with a clear articulation of their applicability in a regulatory and corporate governance context.
Roles and responsibilities
AI regulations must also account for the multiple stakeholders involved in the development and use of AI systems. The AI value chain is complex, with a single system often involving multiple developers, vendors, deployers, and users. Effective AI regulation must carefully allocate responsibilities to the entity that is best positioned to identify and mitigate the potential harms that could arise from the use of the system. Deploying AI responsibly – and achieving the goals of new AI regulation – requires action from actors all across that chain.
We recommend that AI governance frameworks distinguish between the main actors involved in any AI system and articulate the responsibilities for each actor. This typically includes a developer (who builds the AI technology) and a deployer (who purchases the technology from a developer and includes it in a customer-facing system or product). As recognised by the OECD in its accountability principle on AI, developers and deployers are generally not able to make the same risk assessments of an AI system. The developer is frequently better placed to articulate the intended uses, performance expectations, and technical limitations of the AI system. The deployer, on the other hand, will integrate the AI into the services it provides to an end-user, and has the contextual understanding of the use case and of the potential harms that could arise as an outcome of that use case.
Recommendation 3: Government assign responsibility based on the relative ability of each actor in the AI supply chain to address specific risks, in particular by introducing the concepts of developer and deployer.
Leverage existing regulatory frameworks
Put into practice effectively, responsible AI should have the dual effect of increasing the uptake of AI by increasing confidence in the technology, and enhancing public trust by reducing risk and minimising possible harms. To achieve this, it’s important that any regulatory framework recognises that AI is a general-purpose technology that can be put to an extraordinarily broad range of uses. Efforts to devise a comprehensive, one-size-fits-all regulatory framework are unlikely to be effective and would fail to capitalise on the robust legal frameworks already in place. Rather than pursuing a prescriptive “horizontal” approach to AI regulation, where all sectors and use cases are treated along similar lines, government should instead focus on ensuring that existing technology-neutral laws and sector-specific regulations remain fit-for-purpose in an era when AI will increasingly be used.
Organisations develop and deploy AI systems against the backdrop of an array of existing laws and regulatory requirements that help protect the public against potential unintended consequences.
The United States Department of Justice, Federal Trade Commission, Equal Employment
Opportunity Commission, and Consumer Financial Protection Bureau recently stated that “existing legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices.” In issuing the Joint Statement, these agencies affirmed that existing laws vest them with the authorities necessary to “protect civil rights, fair competition, consumer protection, and equal opportunity,” and to “monitor the development and use of automated systems and promote responsible innovation.” Similar assessments apply to the Australian context.
Existing domestic laws, including but not limited to the Corporations Act 2001, Privacy Act 1988, consumer protections, and anti-discrimination laws, have direct applicability to the regulation and governance of AI systems. The Human Technology Institute’s recent paper, The State of AI
Governance in Australia, comprehensively summarised these existing regulatory and governance
frameworks and their applicability to AI. We believe that augmenting these existing regulatory mechanisms with guidance for sector-specific regulators is the most efficient and effective way of implementing AI governance at scale.
Recommendation 4: Government should ensure that technology-neutral consumer protections and sector-specific regulations remain fit-for-purpose. As part of this process, relevant agencies should be required to evaluate whether AI poses unique challenges that create new or different risks, and whether any resulting gaps require regulatory or legal reform.
Vertical vs horizontal approaches
Of course, there will be instances where the use of AI will create novel questions that require adjustments to existing laws and regulations. The integration of AI into medical devices and autonomous vehicles provides two early examples where regulators updated or issued new regulatory guidance to provide clarity for industry and safeguards for the public. For example, the Therapeutic Goods Administration provides clear guidance on software-based medical devices and helps prospective deployers understand whether their product is regulated while maintaining clinical safeguards, with more stringent requirements for higher-risk products. In the UK, the Centre for Connected and Autonomous Vehicles has produced guidance designed to articulate and realise the societal benefits of connected and automated mobility while ensuring safety and security; the result has been the UK becoming the first jurisdiction to approve a hands-free self-driving system.
Globally, clear choices are being made on AI governance between more vertically oriented approaches that take greater account of sectoral context (UK), and horizontally oriented laws that apply requirements based on their respective classifications of AI systems by level of risk (EU) or impact (Canada). The above examples demonstrate the effectiveness of a vertical, sector-specific approach, as it allows regulators who understand the subject matter to make informed determinations about the risk of using AI for specific purposes, rather than attempting a catch-all standalone regulation.
Recommendation 5: Government adopt a ‘vertical’, sector-driven approach to AI regulation and governance. As part of this approach, a statutory duty should be placed on regulators to conduct periodic assessments of how responsible AI dimensions apply in their context, while also considering opportunities to promote safe innovation.
Recommendation 6: Government should establish an expert advisory hub for AI policy within an existing department or agency. This hub would not act as a standalone regulator. Rather, its role would be to provide expertise on AI in support of existing sector-specific policymakers and regulators, and to support safe and responsible government deployments of AI. This hub could be modelled on the UK Government’s central AI risk function, which provides support in identification, enforcement, and monitoring.
Risk-based AI governance
Risks associated with AI are inherently dependent on context; because of this, regulations will be most effective when they target specific high-risk uses of the technology. This view was reflected in the Australian Human Rights Commission’s landmark 2021 report, in which it identified that clear accountabilities were required to ensure human rights are protected where AI is used in decision making. The Commission also observed that “…decisions are generally the proper subject of regulation, not the technology that is used to make those decisions”. While sector-specific regulatory guidance will remain the most effective policy lever for mitigating risk, government should also encourage the adoption of risk-based AI governance best practices more generally.
We believe the goal of government should be to capture and control specific risks associated with deploying some forms of high-risk AI, while generally incentivising and supporting the adoption of AI more broadly. It should be a baseline expectation that organisations have performed impact assessments prior to deploying higher-risk use cases, to ensure they have accounted for the unique risks the system may pose and implemented appropriate governance-based safeguards to identify and mitigate these risks. Developers and deployers of AI systems, as noted above, also have unique roles to play in the governance and development of safeguards surrounding AI systems. For example, developers of high-risk AI systems should document their internal efforts to evaluate and mitigate potential risks and provide documentation to their customers – such as AWS AI Service Cards, described below – so that customers can make informed decisions about how to deploy the system in a responsible way.
In addition to requiring impact assessments for high-risk applications of AI, government should also consider how it can drive adoption of AI management best practices that are aligned to emerging global standards. To that end, the government should ensure that any proposed regulatory requirements are interoperable with ISO/IEC 42001 for AI Management Systems. ISO/IEC 42001 is a comprehensive AI management standard that sets forth an auditable framework for implementing responsible AI goals and commitments, including accountability mechanisms for fairness, security, safety, and privacy. We encourage the Australian Government, if it has not done so already, to actively participate in the development of international standards relating to AI.
Recommendation 7: Sector-specific regulators, with the support of an expert AI advisory hub, should have responsibility for identifying higher-risk use cases for their sector against principles-based guidelines.
Recommendation 8: Government consider aligning its responsible AI governance policies to ISO/IEC 42001. We recommend that the Australian Government, if it has not done so already, actively participate in the development of international standards relating to AI.
Safe and responsible by design
Developers and deployers of AI systems should ensure such systems are built based on principles of safety and responsibility by design. AWS builds AI with responsibility in mind at each stage of our comprehensive development process. Throughout design, development, deployment, and
operations, we consider a range of factors including accuracy, fairness, appropriate usage, toxicity, security, safety, and privacy. We are also committed to providing customers with tools and resources to develop and use AI responsibly. In short, we are helping our customers transform responsible AI from theory into practice. Responsible use of AI technologies is key to fostering continued innovation, and AWS is committed to developing fair and accurate AI services. We believe government should encourage others to do the same, and should also encourage the use of trusted AI developers.
For example, AWS AI Service Cards deliver a form of transparency documentation that provides customers with a single place to find information on the intended use cases and limitations, responsible AI design choices, and deployment and performance optimisation best practices for our AI services. Amazon SageMaker Clarify detects and measures potential bias using a variety of metrics so developers can address potential bias and explain model predictions. And our Responsible Use of Machine Learning Guide highlights key best practices and tooling that AI developers and deployers can use to mitigate risks across the lifecycle of an AI system.
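To make concrete what “measuring potential bias” can involve, the following is a minimal illustrative sketch of one widely used fairness metric, demographic parity difference. It is not the SageMaker Clarify API; the function name, data, and group labels are hypothetical, for exposition only.

```python
# Illustrative only: one common bias metric (demographic parity
# difference), of the kind bias-detection tooling can report.
# This is NOT the SageMaker Clarify API; all names are hypothetical.

def demographic_parity_difference(predictions, groups, positive_label=1):
    """Difference in positive-outcome rates between two subpopulations.

    predictions: model outputs (e.g., 1 = loan approved, 0 = declined)
    groups: group label ("a" or "b") for each prediction, same order
    """
    rates = {}
    for g in ("a", "b"):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(1 for p in outcomes if p == positive_label) / len(outcomes)
    # A value near 0 suggests similar treatment across groups; a large
    # gap is a signal to investigate, not proof of unfairness.
    return rates["a"] - rates["b"]

# Example: approvals skewed toward group "a" (0.75 vs 0.25 -> 0.5)
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))
```

In practice, tools such as Clarify compute many metrics of this kind across a dataset’s facets, both before training and after deployment.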
Another important element of safe and responsible AI regulatory and governance frameworks should be to promote the development of AI systems that are safe and secure throughout design and deployment. Safety and security are top priorities for AWS; both are critical to ensuring the trust and confidence of our customers, and are foundational in how we design our AI products and services. Currently, we support 143 security standards and compliance certifications, including for data protection and privacy controls. Our recent commitments to the White House to advance the secure and responsible use of AI models that are more powerful than any currently released foundation models (FMs) are illustrative of that ongoing priority. The White House commitments are forward-looking and are aligned with AWS’s approach to responsible and secure AI development, and many of these commitments emphasise the important role security plays in responsible AI. All of our services go through rigorous security testing, and we maintain strict physical, electronic, and procedural safeguards to protect our systems. We work closely with the security community to further harden our products and systems, including making it easy to report vulnerabilities.
Recommendation 9: Regulatory and governance frameworks include an expectation for the safety and security of AI systems, with reference to Australia’s existing security and privacy frameworks.
We note that, with an update to Australia’s Privacy Act 1988 and a refreshed cyber security strategy forthcoming, these other areas of government policy should appropriately consider AI.
Oversight and review
As articulated earlier, the use cases and decisions surrounding the deployment of an AI system represent the most consequential stage for a risk-based approach. But we recognise that responsible AI is an ongoing commitment. As part of ongoing oversight and review practices, a responsible AI governance framework could consider the inclusion of the following suggested practices:
Confidence levels. It’s important to understand that many systems generate predictions of a possible or likely answer, not the answer itself. Confidence levels, if available, should be considered when reviewing outputs provided by the system.
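As a minimal sketch of what acting on confidence levels might look like in practice (the threshold value and function names below are assumptions for this example, not tied to any particular service):

```python
# Minimal sketch of confidence-aware review: act on a prediction only
# when the model's confidence clears a threshold, otherwise escalate.
# The threshold and names here are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.90  # higher-risk use cases warrant stricter thresholds

def handle_prediction(label: str, confidence: float) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-accept: {label}"
    # Below the threshold, the output is a *likely* answer, not the answer:
    # route it to a human reviewer instead of acting automatically.
    return f"escalate for human review: {label} (confidence {confidence:.2f})"

print(handle_prediction("approve", 0.97))  # acted on automatically
print(handle_prediction("approve", 0.62))  # routed to a reviewer
```

The appropriate threshold is context-dependent; a higher-risk use case would justify a stricter threshold and a larger share of outputs routed to review.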
Human review. Amazon agrees that, as the Discussion Paper outlines, there may be circumstances – such as for some higher-risk use cases – when having humans in the loop, or involved in reviewing or monitoring an AI system’s operations, is important for minimising potential risks and supporting public trust and confidence. As a practical measure, human reviewers should be appropriately trained on real-world scenarios, including examples where the system fails to properly process inputs or cannot handle edge cases, and should have ways to exercise meaningful oversight.
Use case evaluation and testing. While evaluation and testing are important steps at the conception of an AI use case, they also form an important part of oversight and review mechanisms, including measuring the performance of the system against that use case. Testing should include not just the AI system itself but also the overall process it is a part of, including decisions or actions that might be taken based on system output.
Continuous improvement and validation. ML models can be subject to “concept drift”, where model behaviour changes as a result of changes in users, environments, or data over time. This can be addressed with performance tests to identify areas where additional data or development may improve a system’s performance. Monitoring for potential bias and accuracy, and for models performing as expected across different segments, is an important part of this process.
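As a hedged illustration of one simple drift check (the windows, the monitored score, and the 0.05 threshold below are assumptions for the example, not a universal rule):

```python
# Sketch of a simple drift check: compare a model score's distribution
# in a recent window against a reference window using a two-sample
# Kolmogorov-Smirnov test. Windows and threshold are illustrative.
from scipy.stats import ks_2samp

def drift_detected(reference_scores, recent_scores, p_threshold=0.05):
    statistic, p_value = ks_2samp(reference_scores, recent_scores)
    # A small p-value means the samples are unlikely to share one
    # distribution -- a signal to investigate, retrain, or re-validate.
    return p_value < p_threshold

reference = [0.20, 0.30, 0.25, 0.40, 0.35, 0.30, 0.28, 0.33]
recent = [0.60, 0.70, 0.65, 0.80, 0.75, 0.70, 0.68, 0.73]
print(drift_detected(reference, recent))  # True: the distribution shifted
```

A production system would run such checks on a schedule, per feature and per segment, and pair a drift alert with the bias and accuracy monitoring described above.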
Ongoing education. AI is a constantly evolving landscape, and new techniques, technologies, laws, and social norms will continue to be developed and refined over time. It is essential that all parties involved with building and using AI systems stay educated on these issues and account for them in the design, deployment, and operation of their systems. We encourage all stakeholders in the field, and other interested parties, to contribute knowledge and relay their experiences and learnings to the broader community.
Recommendation 10: Governance frameworks include an expectation of ongoing oversight and review, with the level of oversight commensurate with the level of risk (that is, higher-risk use cases would involve more stringent oversight and review requirements, as opposed to lower risk use cases). These frameworks could include suggested practices.
Taking a people-centric approach
In order to unlock the potential of AI in the Australian economy and society, government must enable and promote digital literacy, skilling, and responsible use of AI tools among children, students, educators, academic communities, and the wider workforce. AI and other emerging technologies will only be appropriately assimilated by the broader Australian economy if our educators, students, and the wider population are aware of their potential and understand how to use them. Australian workers who use advanced digital skills are propelling the country’s growth: they
add an estimated AU$41 billion to the country’s annual gross domestic product (GDP). A 2021 study by AlphaBeta found that the average Australian worker will need to gain an additional seven new digital skills by 2025 to keep pace with technological change, and that Australia requires an additional 6.5 million newly skilled and reskilled digital workers by 2025 to meet future demand for these technology skills. Building a future-ready workforce is key to overcoming hiring challenges and being able to rapidly adapt to future technologies, including AI.
Amazon takes a people-centric approach to responsible AI, and we encourage government to factor this into a holistic policy approach. Our approach starts with education and doing our part to build the next generation of developers and data scientists in AI – including people from backgrounds that have been underrepresented in tech – through scholarship programs and skills training. We offer the AI and ML Scholarship Program and a new, free bias mitigation and fairness course from AWS Machine Learning University, featuring over 9 hours of lectures and exercises.
Education extends beyond technology to people, process, and culture to build awareness for the value in building diverse teams, why responsible AI matters, and the role we all have to advocate for it. Responsible AI is not work that can be done in a silo – it is truly a multidisciplinary effort that requires technology companies, policy makers, community groups, scientists, and others to come together to tackle new challenges as they arise.
Recommendation 11: Government work with the tertiary education sector and appropriate industry partners to design and develop digital literacy and technology competency training within the teaching qualification and broader teacher accreditation and professional development frameworks. With AI increasingly a feature of modern education, as with cybersecurity, we see a need for a digitally literate workforce attuned to the risks of modern technologies.
Recommendation 12: Create a National Advisory Group examining teaching, learning, and assessment practices, and current and emerging technologies, in Australia. Alongside continued review of the Australian Qualifications Framework and the Australian Curriculum and Assessment Authority to ensure they are meeting current and future needs, we recommend the creation of a National Advisory Group co-chaired by nominated delegates from the Federal Department of Education and the National AI Centre’s Responsible AI Network.
Embrace and influence international standards and frameworks
Global challenges require global solutions. Because many of the opportunities and challenges related to AI are global in nature, it is vital for Australia to focus its efforts internationally to ensure alignment and influence interoperable policy solutions to the greatest extent possible. Interoperable, trustworthy AI is a pillar of the OECD AI Principles, as is ensuring a policy environment that opens the way for the deployment of trustworthy AI systems. Since Australia is a signatory to the OECD principles, we encourage government to commit to engaging with international standards bodies to contribute to the development of global technical standards, rather than creating entirely standalone domestic standards. While we appreciate that some level of
localisation is necessary for domestic contexts, we encourage these customised elements to be introduced by exception and to remain principles-based.
Standards play a key role in ensuring a policy environment that opens the way for the deployment of trustworthy AI systems. They will be a kind of Rosetta Stone that helps companies translate domestic regulatory requirements into compliance mechanisms, including engineering practices, that are – largely – globally interoperable. AWS is actively engaged with organisations and standards bodies focused on the responsible development of next-generation AI systems, including NIST and ISO. As noted earlier, we think government should ensure that any proposed regulatory requirements are interoperable with ISO/IEC 42001. Government could also encourage the adoption of NIST’s AI Risk Management Framework, a voluntary resource designed to promote the trustworthy and responsible development and use of AI systems.
NIST is well recognised in Australia for producing actionable, risk-based frameworks. Notably, alignment to the NIST Framework for Improving Critical Infrastructure Cybersecurity has already been adopted by the Security of Critical Infrastructure Act’s Risk Management Program as demonstrating compliance with the subsection on managing cyber and information security hazards. Like NIST’s cybersecurity framework, the NIST AI Risk Management Framework is freely available and aims to harmonise efforts from other standards organisations. The NIST AI Risk Management Framework draws on the ISO 31000:2018 risk management framework, and provides recommendations towards identifying and treating risks stemming from the use of machine learning and artificial intelligence technology. It targets AI trustworthiness issues such as fairness, security, safety, privacy, robustness, explainability, and data quality, and establishes a framework that incorporates organisational aspects such as leadership, governance, design, implementation, evaluation, and continuous improvement throughout the lifecycle of AI system development.
Recommendation 13: Government actively engage with international standards bodies and contribute to the development of a global approach to responsible AI, while localising or customising by exception.
Recommendation 14: Government encourage AI developers and deployers to align to recognised international standards and frameworks, with the NIST AI Risk Management Framework offering a comprehensive, freely available, risk-based approach to managing AI risks.
Conclusion
Conversations among policymakers regarding AI regulation are already active around the world and will only accelerate and deepen as the technologies continue to evolve. Amazon is focused not only on offering best-in-class tools and services that provide for the responsible development and deployment of AI for our customers, but also on continuing our engagement with policymakers to ensure that AI will be used responsibly and in a manner that is consistent with democratic
values. These include fairness, accountability, transparency, safety, and the protection of personal data.
Government has the opportunity to establish Australia as a leader in the development of AI that is safe, responsible, and effective. Accomplishing this requires the articulation of a comprehensive, pro-innovation policy framework that underscores the key role that technology-neutral consumer protections and sector-specific regulations will continue to play. Government should build on these foundations by outlining a sensible regulatory approach that drives the adoption of AI governance best practices, including the completion of impact assessments prior to the deployment of high-risk AI systems. By articulating a regulatory vision that eschews technical over-prescriptiveness, Australia can create a model that enhances the benefits of AI and safeguards the public.
We look forward to continuing the conversation with the Australian Government as to how we solve these complex policy challenges and make safe and responsible AI a working reality.