Submission to the Department of Industry, Science and Resources (DISR): Supporting responsible AI (Artificial Intelligence)
Date: 7 August 2023
About us
Australian Red Cross has been a critical part of Australian life since 1914, and is established by Royal Charter of 1941 as an auxiliary to Australia’s public authorities in the humanitarian field, including during emergencies and armed conflict. Australian Red Cross is one of 192 National Red Cross and Red
Crescent Societies that, together with the International Committee of the Red Cross (ICRC) and
International Federation of Red Cross and Red Crescent Societies (IFRC), make up the International Red
Cross and Red Crescent Movement (the Movement).
The Movement is guided at all times and in all places by seven Fundamental Principles: Humanity,
Impartiality, Neutrality, Independence, Voluntary Service, Unity and Universality. These principles sum up our ethics and are at the core of our mission to prevent and alleviate suffering.
Here in Australia, our core areas of expertise include Emergency Services, Migration, International
Humanitarian Law, International Programs and Community Programs. Australian Red Cross bears witness to the range of vulnerabilities diverse people and communities experience. By working alongside those impacted, Australian Red Cross understands their unique strengths and perspectives.
In 2018, Australian Red Cross established Humanitech with support from founding partner Telstra
Foundation, to build stronger and more resilient communities by using technology to increase the accessibility, scale and impact of humanitarian services. Leveraging Red Cross expertise and operational reach,
Humanitech creates opportunities for technology, private sector, government and communities to work together to identify new ways to harness technology that benefits all Australians, including those experiencing vulnerability.
Australian Red Cross Humanitech brings this community-centred practice into an industry research partnership with the Australian Research Council Centre of Excellence for Automated Decision-Making and Society (ADM+S Centre), and through formal forums including: as a member of the UTS Human Technology Institute’s Expert Reference Groups on Facial Recognition Technology and AI; as a co-founder of the ANU Cybernetics Practitioner Network; and as a contributor to the Australian Human Rights
Commission Human Rights and Technology project.
Overview of Australian Red Cross as of 2022:
- 20,000+ members and volunteers acting for humanity
- 131,000 Australians supported during 42 emergency activations
- 37,500+ people supported through emergency relief payments
- 47,000+ people from 165 countries received support
1. Summary of recommendations
Overall, Australian Red Cross welcomes the recognition in the Department’s “Safe and responsible AI in Australia” Discussion Paper (June 2023) of both the potential positive and harmful uses of technology, and of the protections this calls for. However, these protections can be strengthened through practical and effective approaches, set out in the following recommendations, that better safeguard human dignity and protect against harm, especially for those most vulnerable.
Australian Red Cross has been investing in the potential of AI technologies to address complex
humanitarian needs such as the impacts of climate change. Working locally within the international
Movement, we leverage global research and practice on the benefits as well as current and potential
harms of AI innovation. Australian Red Cross welcomes further opportunities to work with
Government to share these insights, and to embed civil society and community participation in AI
development and regulation.
i. Australian Red Cross recommends application of a risk and harm-reduction approach to strengthen existing regulations by including a lens to identify potential harms and vulnerability across the full AI life cycle.
For further detail refer to 2.1 Potential gaps in approaches: Legal reform centres around harm and vulnerability.
ii. Australian Red Cross recommends that solutions should be ‘technology neutral’ unless the solution cannot address harms that a specific technology creates.
For further detail refer to 3.1 Target Areas: Generic versus technology-specific solutions.
iii. Australian Red Cross recommends that regulations promote the explainability of AI technology and enable end users to provide informed consent and be given the choice of non-technology options.
For further detail refer to 3.2 Target Areas: The importance of transparency across the AI life cycle.
iv. Australian Red Cross supports a human-centred approach and broad community engagement to ensure AI developers maximise the positive benefits and mitigate the negative impacts of their technology on people experiencing vulnerability.
For further detail refer to 3.3 Target Areas: Increasing public trust in AI deployment through community engagement.
2. Potential gaps in approaches
2.1 Legal reform centres around harm and vulnerability
Recommendation:
i. Australian Red Cross recommends application of a risk and harm-reduction approach to
strengthen existing regulations by including a lens to identify potential harms and
vulnerability across the full AI life cycle.
1. A risk-based and harm-reduction approach to strengthen existing regulations would ensure that the humanitarian imperative to ‘do no harm’ is embedded into every stage of the AI life cycle.1 A harm-reduction approach requires AI developers to explicitly identify and consider immediate, unintended, and future harms to people and communities alongside the intended impacts. This would require regulations to apply a contextual approach to vulnerability, considering for whom the technology may create or exacerbate experiences of vulnerability, and how the technology could reduce vulnerability, across specific contexts, use cases and misuse.2,3,4
2. This approach can be taken by:
a. Recognising that the exponential pace of AI innovation and development requires flexible and adaptable approaches. Regulation and self-assurance need to respond not only to the current capabilities of AI and its harms, but also to future-proof against likely future harms.
b. Requiring AI products to be ethical, sustainable, and safe by mandating assessments to identify any potential harms or exacerbation of vulnerability or discrimination. These assessments will benefit from the engagement of community members, particularly people with relevant lived experience. Any potential harm or unintended consequence must have a mitigation plan in place, with mechanisms that create clear accountability for developers (one way such an assessment could be recorded is sketched below).
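To make the shape of such a mandated assessment concrete, the following minimal Python sketch shows one way a harm assessment could be recorded across the AI life cycle. It is illustrative only: the stage names, fields and sign-off rule are our own assumptions, not a prescribed regulatory schema.

```python
from dataclasses import dataclass, field

# Illustrative life-cycle stages; a real scheme would define its own.
LIFECYCLE_STAGES = ("design", "data_collection", "training",
                    "deployment", "monitoring", "decommissioning")

@dataclass
class Harm:
    description: str            # e.g. "misidentification of flood-affected residents"
    affected_groups: list[str]  # who may experience, or have exacerbated, vulnerability
    mitigation: str             # planned mitigation measure (empty if none yet)
    accountable_owner: str      # named role with clear accountability

@dataclass
class LifecycleHarmAssessment:
    system_name: str
    harms: dict[str, list[Harm]] = field(default_factory=dict)

    def record(self, stage: str, harm: Harm) -> None:
        if stage not in LIFECYCLE_STAGES:
            raise ValueError(f"unknown life-cycle stage: {stage}")
        self.harms.setdefault(stage, []).append(harm)

    def ready_for_signoff(self) -> bool:
        # Every stage must have been assessed, and every identified harm
        # must have a mitigation plan and an accountable owner.
        return all(stage in self.harms for stage in LIFECYCLE_STAGES) and all(
            h.mitigation and h.accountable_owner
            for stage_harms in self.harms.values() for h in stage_harms)
```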
3. Australian Red Cross Humanitech Lab has evidenced this through our work with technology startups such as AirSeed, a company using drone technology, machine learning and seed pod biotechnology to rapidly revegetate areas of disaster-damaged land.5 AirSeed has integrated Australian Red Cross’ lived experience framework and Humanity First Principles into its self-assurance process to maximise community benefit and avoid potential harms. Community engagement with people impacted by flooding and landslides has enabled AirSeed to identify environmental and psychosocial needs, and to address potential harms and vulnerability.
1 Sphere (2018) ‘The Sphere Handbook: Humanitarian Charter and Minimum Standards in Humanitarian Response’, Chapter: Protection Principles, https://handbook.spherestandards.org/en/sphere/#ch004.
2 L Young and I Jurko, ‘Future of Vulnerability: Humanity in the Digital Age’ (2022), Humanitech, Australian Red Cross, p. 31, https://www.humanitech.org.au/globalassets/humanitech/pdf/red-cross-fov-combined-digital_2.pdf.
3 510 Netherlands Red Cross, ‘Using data responsibly in our daily work’, August 2017, https://www.510.global/510-data-responsibility-policy/.
4 Australian Red Cross, ‘Submission to the Human Rights and Technology project’, October 2018, unpublished.
5 Australian Red Cross, ‘AirSeed: Drone planting takes flight to promote reforestation in flood-affected NSW’, 2022, https://www.humanitech.org.au/resources/airseed/.
3. Target Areas
3.1 Generic versus technology-specific solutions
Recommendation:
ii. Australian Red Cross recommends that solutions should be ‘technology neutral’ unless the solution cannot address harms that a specific technology creates.
4. As a member of the Human Technology Institute’s (HTI) Expert Reference Group for the Model Law for Facial Recognition Technology (FRT) project, Australian Red Cross supports HTI’s position that any laws and regulation developed should be technology neutral, unless the law cannot deal with the level of complexity or vulnerability that some technology creates.6 Where technology-neutral laws are insufficient to address the particular vulnerabilities or identified harms to individuals or communities that can be created by specific AI technologies, additional laws and regulations may be required to address the level of complexity and sensitivity of specific contexts.
5. ICRC has examined the specific vulnerabilities and misuse that FRT can create in migration and conflict contexts. For example, the Restoring Family Links (RFL) tracing program works across the Movement to reconnect families forced to flee their homes due to conflict or crisis. ICRC, together with practitioners across the Movement, developed Trace the Face, an online tracing tool using FRT. To address the risks posed by FRT, ICRC employed a ‘do no harm’ approach, designing the technology to ensure that it does not put missing people and their families in danger. These safeguards include strict data protection protocols, decentralisation of data, human feedback loops and human support to ensure informed consent is given.7
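As a hypothetical illustration of the general pattern behind such safeguards (and not a description of how Trace the Face is actually implemented), the sketch below shows a disclosure gate in which a candidate match from a face-matching model is never released automatically, but only after human review and recorded informed consent.

```python
from dataclasses import dataclass

@dataclass
class CandidateMatch:
    case_id: str
    similarity: float  # score produced by the matching model

def may_disclose(match: CandidateMatch,
                 reviewed_by_caseworker: bool,
                 informed_consent_recorded: bool) -> bool:
    """Hypothetical 'do no harm' gate: a possible match is shared with a
    searching family only if a human has reviewed it AND the person found
    has given informed consent to be contacted."""
    # No fully automated disclosure, regardless of model confidence.
    if not reviewed_by_caseworker:
        return False
    # Consent is a hard precondition, not a score threshold.
    return informed_consent_recorded
```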
6. ICRC has also identified heightened risks to communities from misuse of AI-enabled digital surveillance, monitoring and intrusion technologies, including people being targeted, facing ill-treatment, having their identity stolen, being denied access to services, or suffering psychological effects from the fear of being under surveillance.8 Australian Red Cross supports the ICRC position that mandated data protection impact assessments and human rights impact assessments can assist in providing a clear framework for identifying risks, solutions and recommendations concerning data-driven AI systems.9
7. These examples of current Movement practice illustrate the importance of lawmakers identifying vulnerabilities and potential harms in the development of new laws and regulations, along with the involvement of community members and organisations who have expertise in identifying these potential harms and risks.
6 N Davis, L Perry and E Santow (2022) ‘Facial Recognition Technology: Towards a model law’, Human Technology Institute, University of Technology Sydney, https://www.uts.edu.au/sites/default/files/2022-09/Facial%20recognition%20model%20law%20report.pdf.
7 L Young and I Jurko, ‘Future of Vulnerability: Humanity in the Digital Age’ (2022), Humanitech, Australian Red Cross, p. 31, https://www.humanitech.org.au/globalassets/humanitech/pdf/red-cross-fov-combined-digital_2.pdf.
8 ICRC, ‘Symposium Report: Digital Risks in Situations of Armed Conflict’, March 2019, p. 9, https://www.icrc.org/en/event/digital-risks-symposium.
9 A Beduschi, ‘Harnessing the potential of artificial intelligence for humanitarian action: Opportunities and risks’, International Review of the Red Cross (2022), 104 (919), 1149–1169, https://international-review.icrc.org/sites/default/files/reviews-pdf/2022-06/harnessing-the-potential-of-artificial-intelligence-for-humanitarian-action-919.pdf.
3.2 The importance of transparency across the AI life cycle
Recommendation:
iii. Australian Red Cross recommends that regulations promote the explainability of AI technology and enable end users to provide informed consent and be given the choice of non-technology options.
8. Australian Red Cross supports the recommendation of the Australian Human Rights Commission and the approach of the Australian Government's AI Ethics Principles, which call for respect for human rights throughout the AI life cycle. This includes AI systems being lawful, transparent, explainable, and subject to appropriate human oversight, review, and intervention.10
9. To promote explainability, informed consent and choice, Australian Red Cross recommends:
a. Requiring that personal data is used for AI applications, or shared with third parties, only with the fully informed consent of the individual. The individual must be aware that their data is being collected, why, and how it will be maintained, stored, used and deleted.11 Consent can only truly be voluntary if the individual has the choice to opt in or opt out of participation in the AI application for which their personal data is being used.
b. Requiring any application to be designed and used under the principle of “do no harm” in the digital environment, and to respect Australians’ right to privacy, including as it relates to personal data protection.12
c. Reducing the risk of discrimination against people in the community experiencing vulnerability, with policies requiring alternative options to be available and easily accessible to all people. This ensures that no individual is disadvantaged or punished, including through reduced access to services, if they do not consent to providing their data or prefer to use a non-digital channel to access assistance or services.13,14
d. Providing people with easy-to-understand information in a variety of formats that minimise barriers to access, including disability, lack of online accessibility, and low literacy, to enable fully informed consent. The provision of alternative options that do not require technology or digital literacy ensures that the rights of all people in the community are protected in relation to their personal data and that consent is voluntary and informed.15 (A sketch after this list illustrates how these elements of valid consent could be captured.)
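One way the elements of valid consent listed above could be captured in practice is sketched below. All field names are hypothetical and chosen only to mirror points (a) to (d); this is a minimal illustration, not a prescribed data model.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ConsentRecord:
    data_collected: list[str]      # what personal data is collected (point a)
    purpose: str                   # why it is collected (point a)
    delete_after: date             # when it will be deleted (point a)
    third_parties: list[str]       # named third parties the data is shared with (point a)
    accessible_notice_given: bool  # easy-to-understand info in accessible formats (point d)
    non_digital_alternative: bool  # service remains available without the AI channel (point c)
    opted_in: bool                 # explicit, revocable opt-in (point a)

def consent_is_valid(c: ConsentRecord) -> bool:
    """Consent counts as informed and voluntary only if the person was told
    what is collected, why, and for how long, in an accessible form, and
    refusing carries no loss of access to assistance or services."""
    return c.opted_in and c.accessible_notice_given and c.non_digital_alternative
```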
10. ICRC identifies the need for greater explainability and informed consent in situations where people are required to provide sensitive or personal data to access humanitarian assistance, including in migration settings. Identified risks to people seeking humanitarian assistance from the collection of sensitive data include threats to life, integrity, dignity, and psychological or physical security. There is also a significant risk of ‘function creep’ over time: for example, data collected for humanitarian assistance being used for other purposes such as migration management, asylum claims or identification by authorities, creating the possibility that the data will ultimately be used in ways that individuals do not want, understand or consent to.16
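A purpose-limitation check of the following kind captures the safeguard against function creep: data collected for one purpose cannot be reused for another without fresh, explicit consent. This is a minimal sketch with assumed purpose labels, not a real system.

```python
def may_reuse_data(purpose_at_collection: str,
                   requested_use: str,
                   fresh_consent_for_new_use: bool) -> bool:
    """Hypothetical purpose-limitation guard against 'function creep'."""
    if requested_use == purpose_at_collection:
        return True
    # Any new purpose (e.g. reusing humanitarian-assistance data for
    # migration management) requires fresh, explicit consent.
    return fresh_consent_for_new_use

# Example: data collected for humanitarian assistance may not silently
# be reused for migration management.
assert may_reuse_data("humanitarian_assistance", "humanitarian_assistance", False)
assert not may_reuse_data("humanitarian_assistance", "migration_management", False)
```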
10 Australian Human Rights Commission, ‘Human Rights and Technology: Final Report’ (2021).
11 European Union, General Data Protection Regulation (2016), Articles 4 and 35, https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679.
12 ICRC, ‘Artificial intelligence and machine learning in armed conflict: A human-centred approach’, Geneva, 6 June 2019, https://www.icrc.org/en/document/artificial-intelligence-and-machine-learning-armed-conflict-human-centred-approach.
13 The Trust Alliance, ‘Submission to the Australian Government‘s Digital Identity Legislation Position Paper – Phase 2 Consultation’, 13 July 2021, https://trustalliance.org.au/wp-content/uploads/2021/07/23.06.21-Digital-Identity-Legislation-Outline.docx-1-1.pdf.
14 A Beduschi, ‘Harnessing the potential of artificial intelligence for humanitarian action: Opportunities and risks’, International Review of the Red Cross (2022), 104 (919), 1149–1169, https://international-review.icrc.org/sites/default/files/reviews-pdf/2022-06/harnessing-the-potential-of-artificial-intelligence-for-humanitarian-action-919.pdf.
15 A Beduschi, ‘Harnessing the potential of artificial intelligence for humanitarian action: Opportunities and risks’, International Review of the Red Cross (2022), 104 (919), 1149–1169, https://international-review.icrc.org/sites/default/files/reviews-pdf/2022-06/harnessing-the-potential-of-artificial-intelligence-for-humanitarian-action-919.pdf.
3.3 Increasing public trust in AI deployment through community engagement
Recommendation:
iv. Australian Red Cross supports a human-centred approach and broad community engagement to ensure AI developers maximise the positive benefits and mitigate the negative impacts of their technology on people experiencing vulnerability.
11. Red Cross Red Crescent global practice consistently demonstrates the value of a human-centred approach in building impactful and trusted AI.17 For example, the Maya Cares chatbot and online resource, piloted through Australian Red Cross Humanitech, supports women of colour to understand, process and address racism. It was developed by, and in collaboration with, more than 250 Australian women of colour with lived experience of racism. In its first year, Maya Cares’ product performance rates were roughly double the industry average (9 per cent compared with the industry benchmark of 2-5 per cent), and user feedback indicates that the integration of lived experience has built user trust in the product.18
12. Australian Red Cross Humanitech is also partnering with Kara Technologies to translate emergency messaging into Auslan using ‘digital human’ avatars. The participation of the Deaf community is critical to ensure this solution is fit for purpose, trusted and safe. The design process is centred on accessible and inclusive community engagement, embedded at every stage. This ensures that the technology meets the needs of its users and does not cause them harm. As a result, community confidence in the safety and value of this technology is enabling its responsible and effective implementation, with benefits for all.19
16 ICRC, ‘Handbook on Data Protection in Humanitarian Action’, 2nd Edition, p. 136, https://www.icrc.org/en/data-protection-humanitarian-action-handbook.
17 ICRC, ‘Artificial intelligence and machine learning in armed conflict: A human-centred approach’, Geneva, 6 June 2019, https://www.icrc.org/en/document/artificial-intelligence-and-machine-learning-armed-conflict-human-centred-approach.
18 Australian Red Cross, ‘Maya Cares: The chatbot supporting women of colour through racism’, 2022, https://www.humanitech.org.au/resources/maya-cares/.
19 Australian Red Cross, ‘Kara Tech: Pioneering new emergency announcement systems in sign language’, 2022, https://www.humanitech.org.au/resources/kara-tech-case-study/.
Contact Details
Chris Kwong
Head of Government Engagement & Strategic Initiatives, Australian Red Cross
Phone: +61 423 211 598
Email: ckwong@redcross.org.au