11 August 2023
Via email: DigitalEconomy@industry.gov.au
Woolworths Group submission in response to the Safe and Responsible AI in Australia Discussion Paper
Dear Sir/Madam,
Woolworths Group Limited (Woolworths) welcomes the opportunity to make a submission to the public consultation on the Safe and Responsible AI in Australia Discussion Paper (the paper).
Woolworths was founded in 1924 and has a proud history of serving Australian communities. We are one of Australia’s largest retailers and private sector employers, with over 180,000 team members. We serve over 20 million customers each week nationwide across more than 1,250 Woolworths Supermarkets, Woolworths Metro and BIG W stores.
This is supported by our extensive logistics and distribution networks which include 21 distribution centres across the country. We offer a range of consumer services and our WPay payments business facilitated over one billion card transactions in FY22. We also operate Everyday Rewards, which has 14 million Australian members.
In 2017, we created WooliesX, a business that combines the digital, e-commerce, data and customer capabilities of our Group. We also have a majority interest in Quantium, a global leader in data science, artificial intelligence and advanced analytics. Founded in Sydney in 2002, Quantium employs over 800 people across offices in Australia, the United Kingdom, South Africa, New Zealand and India.
In 2021, we created a new combined entity with Quantium, known as wiq, to develop data and analytics led solutions to enhance retail experiences for our customers. This submission incorporates Quantium’s and wiq’s positions on the issues outlined in the paper.
We are already leveraging AI in a number of ways. We approach AI with the following principles in mind:
● it must deliver Better Experiences for our customers and teams;
● it must be guided by responsible and ethical use cases with consideration of user
accountability and reliability; and
● there must be transparency of outputs, data ethics, privacy and security.
As Australia’s most trusted brand, we build and adopt robust data and AI ethics procedures, practices and technology controls that grow trust. We are building confidence with our customers through our recently created Privacy Centre, which provides information about the way we collect and handle a customer’s personal information. We provide plain-language explanations of relevant topics, such as the techniques and technologies we use to improve our service, personalise experiences and make the content and promotions customers receive more relevant.
Summary of Woolworths’ key feedback
As outlined in the discussion paper, AI is already delivering significant benefits across the economy and society.
We reviewed the paper in the context of three guiding principles:
● Australians are known to be great adopters of technology and have grown to expect a
streamlined digital experience;
● Australians need to be reassured that AI technologies and systems are deployed in a safe and
responsible manner, and to that end, we hold ourselves to a high standard; and
● To enable individual-focused regulation that still allows customers to benefit from emerging
innovations, any future AI regulatory framework must be well integrated with broader cyber
and privacy regulation.
The best-laid principles also need to be capable of practical application. For this reason, we recommend that the Commonwealth consider how, beyond identifying individual reform opportunities, practical and effective AI governance can be delivered, noting that multiple State jurisdictions have commenced their own consultations into the current and future extent, nature and impact of AI. In this context, we provide the following comments:
● Given the impact of AI on all sectors of the Australian economy and society, and the concurrent inquiries into AI being conducted by the NSW and SA Parliaments, the Federal Government should consider what institutional arrangements will help regulators develop a modernised and consistent approach to AI regulation. We suggest the Federal Government lead this work through a newly established central office or advisory body on AI technology;
● Whilst protecting the interests of Australians, regulation should act to support the continued growth of a safe and responsible AI industry in Australia, noting that many domestically developed AI products, such as Quantium’s, have export potential;
● AI does not operate in a vacuum, so regulation should be technology-neutral and take a risk-based approach, as is already seen in privacy regulation. The Privacy Act 1988 already requires an organisation to take ‘reasonable steps … to protect the personal information they hold’, which implies a risk-based approach to protecting different types of personal information;1 and
● Restrictions should be based on high-risk activities, rather than individual technologies or methods, consistent with frameworks in place for the regulation of non-AI technology. As such, APP entities should be required to conduct privacy impact assessments (PIAs) for high-risk activities.
Specific comments on key themes outlined in the paper are detailed below.
1. OAIC, Guide to securing personal information.
Specific feedback on individual proposals
Definitions
Woolworths Response: The consistent application of definitions across government guidance material is essential to the success of any AI governance framework. We support the paper’s holistic approach of regulating the ‘outputs’ of AI, rather than the technology itself, noting the spectrum, range and rapid evolution of AI technologies.
We agree with the other definitions used in the paper for key terms, including ‘Machine learning’, ‘Generative AI models’, ‘Large Language Models’ (LLM), ‘Multimodal Foundation Model’ (MFM) and ‘Automated Decision Making’ (ADM), and submit that they should be applied consistently across the whole of government. For example, this paper frames ADM regulation around ‘automated decision making’, while the Privacy Act Review Report 2022 addresses ‘substantially automated decision making’. Harmonised definitions will provide regulatory clarity and certainty for businesses, giving them confidence to invest in new technologies and innovations.
The paper also uses a number of terms, including ‘fairness’, ‘privacy risk’ and ‘bias’, without establishing guidelines on how the government expects organisations to apply those standards. The intended meaning and application of these terms should be set out in government guidance, drafted in close consultation with industry.
Woolworths Group Recommendation 1: Department to establish a guide of definitions for key
terms relating to the regulation of AI, for use across all of government and aligned to other relevant
acts (e.g. Privacy Act).
Woolworths Group Recommendation 2: Department to provide clear definitions for key terms used
in the paper such as ‘fairness’, ‘bias’ and ‘privacy risk’.
Potential gaps in approaches
Australia’s existing regulatory approaches
Woolworths response: Australia is well positioned globally through its current approach of regulating actions rather than individual technologies. However, we see the value in reviewing ancillary regulatory frameworks through a detailed gap analysis by the Office of Impact Analysis. Intellectual property regulation and proportionate liability for users and developers of AI do not appear to be keeping pace with advancing technologies.
There are a range of international bodies currently developing standards for AI and risk assessments for business applications of the technology. Federal Government support for organisations including CSIRO’s Data61 and the Responsible AI Network (RAIN) is integral to the industry-wide uplift in responsible AI capability. We also note the important role international standards, such as those developed through ISO, play in informing Australian business practices. As a business, we participate in ISO processes, including JTC 1/SC 42 (Artificial Intelligence) with Standards Australia committee IT-043. It is imperative that the government also engage in international standards development and reflect those standards in its regulatory models to ensure Australian regulations remain fit for purpose in a global context.
We support an enabling domestic policy environment for AI: one that supports an agile transition from the research and development stage to the deployment and operation stage for trustworthy AI systems. To this end, the government should consider permitting experimentation in a controlled environment in which AI systems can be tested and, if appropriate, scaled up. A regulatory sandbox, for example, is a limited form of regulatory waiver or flexibility that enables businesses to test new technologies while still protecting consumers, and can lead to new and potentially safer AI products.
Woolworths Group Recommendation 3: Government to establish a regulatory sandbox to test
concepts under relaxed regulatory requirements at a small scale, on a time-limited basis and with
appropriate safeguards in place to determine the feasibility of future regulations.
Coordination of AI governance across Government
Woolworths Response: Given the potential application of AI in all sectors of the Australian economy and society, it is essential that the Department consider the institutional arrangements that will help decision makers and regulators implement a coordinated approach to AI regulation across the States, Territories and the Commonwealth. We believe this would be best achieved through a central policy office or advisory body on AI technology, led by the Commonwealth, with input from State and Territory regulators. Without such an approach, Australia risks a fractured policy framework across States and Territories, placing Australian innovators at a regulatory disadvantage to their peers.
Similar to the National Cyber Security Coordinator, the appointment of a “National AI Coordinator” could centralise government interactions through a single point of contact and provide independent expert advice for all governments and the private sector. The establishment of a central organisation would facilitate the development of specialist expertise in what are very technical domains, and would allow proactive engagement with practitioners at a deeper level of technical understanding. This is the model currently used for industry-specific regulation in technical areas (e.g. APRA and FSANZ).
Woolworths Group Recommendation 4: Government to establish a central forum for advice and
policy coordination for AI regulation.
Responses suitable for Australia
Governance measures overseas
Woolworths Response: AI and data science are a growing export for Australian companies. A 2022 KPMG report found that emerging tech exports to the United States could represent a $24 billion boost to the Australian economy by 2030.2 Australian technology exporters will only continue to experience this growth if Australia remains an attractive market in which to develop AI technology. AI governance should act to support domestic innovation for Australian firms that operate in global markets.
As with privacy, regulations in major external markets (e.g. the GDPR) can become de facto regulation in other markets and a mechanism for achieving ‘adequacy’ of regulation and standards.
2. KPMG, Accelerated US-Australia trade in Emerging Tech worth billions to Australia, 30 September 2022.
Australian companies, including Quantium, operate across multiple markets. Woolworths Group’s interests also extend to operations in the New Zealand market under Woolworths New Zealand Limited. Australian regulations that unduly constrain local companies relative to those in other markets could hinder innovation in Australia and the competitiveness of Australian-developed AI solutions in overseas markets.
We support a tiered, risk-based approach to regulation in which risk classifications are not predetermined but are distinguished by the primary drivers of harmful events. The EU regulatory framework on AI, for example, identifies specific applications of AI as high risk, not the technology itself. Consistent with this model, and with the view expressed in our submission to the Privacy Act Review Report 2022, we support requirements for organisations to take reasonable steps to identify high-risk applications of AI and to implement measures to mitigate those risks.
Woolworths Group Recommendation 5: Government to prioritise multinational engagement and coordination on both the technical aspects of AI (through standards development organisations) and the economic opportunity (by updating Australian Economic Cooperation Agreements and Free Trade Agreements with Digital Economy Agreements containing articles on AI) to support the growing Australian AI export sector.
Target areas
Ensuring regulation is fit for purpose
Woolworths Response: AI regulation should be as technology-neutral as possible and should target the drivers of undesirable or unintended outcomes arising from the improper use of AI.
Non-legislative technology-specific guidance would also be valuable in the form of best-practice guidelines on how to achieve certain risk-mitigation objectives, depending on the technology chosen.
Transparency across the AI life cycle
Woolworths response: As previously stated in our submission to the Privacy Act Review Report 2022, we support an approach to transparency that requires notification to customers about the kinds of inferences an entity may generate about an individual that constitute personal information, including via AI. An organisation should provide sufficient transparency so individuals can understand how the organisation could use personal information to make inferences. This level of transparency enables individuals to make an educated decision about whether to share their personal information with an organisation. For instance, we have a Group Privacy Centre (and a separate Privacy Centre more specific to our Everyday Rewards Program where we have a variety of internal and external partners), which offer further transparency to individuals, helping them make informed choices.
However, we believe this should not create a universal requirement for a company to disclose commercially sensitive information in relation to its ADM or AI. Such requirements should be subject to an appropriate level of oversight according to risk profile, restricted to high-risk applications of AI, and with caveats that the disclosure of this information cannot be used to the commercial disadvantage of the IP owner. In the first instance, disclosure obligations should be met through the relevant regulator through which a query was made, rather than directly to a consumer. To reduce duplication and regulatory burden, ADM transparency measures should be directed to the entities that are best placed to provide this information. For example, where a company uses a third-party service (such as credit reporting), the information would be more appropriately reported by the third party providing the service. In circumstances where a technology is developed in-house or adapted from a third-party provider, it would be more appropriately reported by the organisation operating the technology.
We also ask that the Department take into account concurrent reforms to the Privacy Act 1988 when considering recommendations arising from this paper and subsequent submissions.
Woolworths Group Recommendation 6: Responsibility for providing details about AI, GenAI or ADM should fall on the party that developed the algorithms. This could be a third party if the algorithms were adopted as is, or the organisation operating them if they were developed in-house or adapted from a third-party provider. Formal disclosures made in response to a request or complaint should be made through the relevant regulator through which the original query was made.
Impact Assessments
Woolworths Response: As an organisation with multiple customer-facing brands, we support customer-focused regulation that fosters emerging innovations and allows customers to benefit from them.
We reiterate the support we expressed in our submission to the Privacy Act Review Report 2022 for requiring APP entities to conduct PIAs for high-risk activities and to implement measures to mitigate the risks identified. Risk ratings and restrictions should be based on an assessment of the activity rather than the technology, method or predefined categories, consistent with frameworks in place for the regulation of non-AI technology. Banning or restricting high-risk activities on the basis of technology-based risk assessments could also diminish Australia’s expertise in certain fields of technology. This could see Australia fall behind global standards and hinder competitiveness, reducing domestic capacity to produce strategic applications of new technologies locally. Instead, the application of these technologies should be regulated through a safe and responsible-by-design framework, with PIAs appropriately identifying and mitigating risk.
Woolworths Group Recommendation 7: APP entities should be required to conduct PIAs for high-risk activities. The determination of risk classification and any conformity requirements should be based on the PIA’s assessment of the activity in context, rather than technologies or methods specified in regulation.
Risk-based approaches
Woolworths Response: In the 5-Year Productivity Inquiry: Australia’s data and digital dividend report volume 4, it was stated that:
Focusing policy settings too narrowly on an individual technology or a single aspect
of data use is likely to be ineffective. Such an approach risks creating disparate
regulations that target specific problems, and this kind of piecemeal regulatory
environment could lead to additional uncertainty or costs for businesses, deterring
adoption of productivity-enhancing uses of technology and data.3
A technology-agnostic approach will assist in future-proofing regulation against technology breakthroughs. It will also enable restrictions to be based on the risk rating and risk components of a proposal.
3. 5-year Productivity Inquiry: Australia’s data and digital dividend, Inquiry report, volume 4, p. 90.
As such, we support a risk-based approach that takes into account the following elements of risk assessment to determine which use cases carry which level of risk:
● The probability of the risk occurring, including consideration of the actors, their level of skill
and expertise, and their likelihood of access to the system in question;
● The impact if the risk event does occur, including consideration of the financial loss, and any
harms caused to victims that may not be financial, with special attention to irreversible harms;
and
● The mitigations being proposed and how they directly modify the probabilities or impacts
determined above.
We also provide broad support for the possible elements of a draft risk-based approach as outlined in Attachment C of the paper, with the following caveats:
● Consistent with our Recommendation 6, transparency measures should be directed to entities
that are best placed to provide information to relevant individuals. For example, where a
company uses a third party service, that would be appropriately reported by the third party;
and
● Further industry guidance should be developed on bias detection and mitigation measures.
Woolworths Group Recommendation 8: Government to consider a technology-agnostic, risk-based
regulatory framework to promote continued innovation in AI.
***
We appreciate the time and effort the government has taken to review our submission on this important matter. We look forward to working with the Department of Industry, Science and Resources on these reforms and to continued engagement in subsequent stages of the policy development process.
If you would like to discuss any aspect of this written submission, please contact Ryan Mahon, Reputation and Public Policy Manager, via rmahon1@woolworths.com.au.