4 August 2023
To: Technology Strategy Branch
Email: DigitalEconomy@industry.gov.au
Safe and Responsible AI in Australia: Response to discussion paper
1. KomplyAi® welcomes the opportunity to provide feedback to the Minister for Industry and Science, the Hon Ed Husic MP, on his discussion paper Safe and Responsible AI in Australia, which aims to understand further regulatory and governance responses to maximise artificial intelligence (AI) opportunities while protecting Australians.
2. By seeking community-wide feedback, the paper is an important step towards an effective
and proportionate regulatory approach, building on the valuable work of various regulatory
bodies including the eSafety Commissioner, the Responsible AI Network (RAIN) and the
NSW AI Committee. Please find our responses to the paper’s questions below.
3. We have included a definition section and further information about AI at the end of our submission for explanatory guidance. We have also attached a number of annexes which provide a greater level of detail about global laws intersecting with AI.
ABOUT KOMPLYAI
4. KomplyAi is an Australian-based, owned and operated technology start-up supporting the rapid and safe adoption of AI technologies through an innovative governance, risk and compliance (GRC) platform, and the development of other micro tools in AI education. KomplyAi was formally founded in 2022 in Australia to solve the global shortage of AI compliance solutions, with the aim of democratising access to AI knowledge and tooling.
We are on a mission to ensure AI is always built to the highest ethical and responsible standards.
5. We have been undertaking research into AI technologies and global laws intersecting with AI in higher-risk verticals for a number of years. KomplyAi is a member of the U.S. Massachusetts Institute of Technology's Computer Science and AI Laboratory Start-up Connect Plus Program, in the Machine Learning & AI category. KomplyAi is currently the sole Australian technology company approved in the "AI GRC" category for inclusion in the global Ethical AI Database alliance, and in the Organisation for Economic Co-operation and Development's global Catalogue of AI Tools & Metrics ('OECD.AI').
6. Our risk assessment tools and compliance documents were developed with AI experts from around the globe, including Australia's Gradient Institute, a recognised leader in responsible AI and an inaugural member of the Australian Government's Responsible AI Network.
7. Currently, we understand that we are the only Australian technology company in GRC providing an enterprise-level platform solely targeting AI technologies.
8. KomplyAi aims to contribute to this feedback process by drawing on our expertise in AI and GRC and our 'front line' experience in developing technologies in this field. We are uniquely positioned to support the Government's investigation of these issues, and any ongoing piloting exercises.
9. KomplyAi's founder, Kristen Migliorini, has been a deep technology lawyer and intellectual property litigator for over two decades, engaged in risk and compliance activities and technology development in a number of senior in-house legal roles, including for a top 4 bank and a Go8 university. Previously, Migliorini was involved with DFAT and the Department of Defence in the development of dual-use technology and sanctions legislative drafting, and higher education sector-wide compliance tooling. She further undertook sponsored sector research about whether Australia would be put in a disadvantageous position vis-à-vis global counterparts such as the U.S., in areas such as quantum computing, as it related to the proposed legislative reforms on controls of intangible transfers of dual-use technologies, with a particular focus on the treatment of fundamental research and development and statutory exclusions to those activities.
10. Migliorini's experience in deep technologies, and her engagement in the extensive industry piloting of earlier proposals for the Defence Trade Controls laws, and of the permitting system and controls that were advocated for and resulted, provide some useful experience here. There are analogous positions to the debate on safe and responsible AI, particularly the intersections with the use of AI in dual-use technologies, potential nefarious uses of this technology, and intangible transfers of those technologies across international borders1: managing risk in an increasingly globalised world.
1 Kristen Migliorini, 'Artificial Intelligence (Ai)—An Australian Perspective' (2021) LVI(No. 3) les Nouvelles - Journal of the Licensing Executives Society.
EXECUTIVE SUMMARY
Our position on how Australia should tackle the challenges of AI has become more nuanced since we first wrote about these policy issues under the previous Government's AI framework.
That is for two main reasons: (1) the advent of more sophisticated foundation models being deployed on market, and the downstream flood of generative AI tools being deployed globally, agnostically and en masse; and (2) our deepening experience in developing AI compliance technologies and in understanding the downstream impact of impending global laws on organisations. We have greater technical insight into some of the common pitfalls, and 'front line' experience and visibility of how businesses (of varying sizes and sectors) are dealing with developing and using AI, including as their activities intersect with an impending legal regime such as the European Union AI Act.
Our original policy focus was on the importance of global interoperability and parity with overseas countries, because of the extraterritorial application of AI laws from some regions such as the European Union. However, as we have more closely considered global laws and Australia's unique position vis-à-vis AI, we believe that there is a different way: a way that represents coherence with responsible AI and with global harmonisation efforts, largely in respect of alignment on prohibited AI activities.
Australia has the opportunity to do things differently here and come out on top. At the same time, it must ensure that it is not a testing ground for AI that has serious public consequences that would be difficult to wind back.
Of note, we recommend that Australia take a different approach to narrower use-case determinations of higher-risk activities intersecting with AI, and introduce a "technology passport": an accreditation or licensing regime. This would attach to prescribed organisations developing, procuring, deploying or exporting AI. Into the future, other emerging forms of technology could be anticipated under this regime.
We believe that this could better enable technology fluidity and neutrality, and create fewer barriers that unnaturally confine AI, and its multiple layers and intersections with other emerging forms of technology, to particular sectors, activity types and a level of prescription; obligations should instead be connected to fundamental baseline corporate governance requirements that better address issues of public safety.
The focus is not on the level of risk that the technology may create based on ever-changing use cases, but on creating a system in which companies gain a competitive safety advantage that better fosters customer uptake. Risk classification systems, and prescribed high-risk determinations based on use cases, may create artificial and inherent consumer distrust, and subsequent business hesitancy to innovate in those use cases.
Government intervention could instead take the form of incentives for good corporate governance in data privacy, cyber security, and risk and quality management, and better promote safe innovation and trust.
In our view, Australia does not currently have the administrative resources to facilitate a legislative regime in AI matching the scale and pace of changes to the technology landscape that would be required by a product-based approach. A regime that is ill-equipped to deal with these changes at pace, and to curb market behaviours that present real harm, is not good for anyone. Traditional forms of technology, and the supporting Government and agency administrative infrastructure, are fundamentally different to AI and what we are seeing in market now. Arguably, a legal assessment of whether an organisation has met its licensing requirements or conditions, based largely on baseline requirements of good corporate governance, may also better accommodate Australia's immediate resourcing constraints.
There would, however, be clear licensing exclusions for certain categories, such as start-ups and SMEs, that meet specific safety criteria and are not otherwise involved in prohibited AI activities. In our experience, start-ups and SMEs that are not well funded, or not positioned with in-house technical capabilities across a wide breadth of areas (cyber, data privacy, responsible AI, IP), will not be in a position to meet some of the more robust compliance obligations under the EU AI Act.
A start-up or SME could still choose whether it voluntarily complies with the aforementioned regime (much like privacy laws). Where it does, it could be provided with Government funding support, and further economic incentives such as priority co-development opportunities in Government procurement and access to those contracts, or other expedited programs that provide a tangible economic incentive for good safety behaviours. However, this will also be largely driven by insurance companies, and their requirements for supporting indemnification of start-ups and SMEs in AI under their insurance policies.
Australia should also closely consider some of the exemptions that are being finalised in the European Union for two key areas: (1) research and development activities that are non-prejudicial to commercial activities; and (2) the treatment of collaborative development of open source AI components and their placement in open repositories2. The complexities being explored in R&D, open source and the treatment of the open source ecosystem, including as it intersects with the build and testing of foundation models, should be a key area of focus for the Government.
We want to ensure that Australia is not put in a disadvantageous position in this respect, and that further competition barriers are not created for smaller players. We have some existing useful frameworks for management of these areas in Australia.
We also support legislative intervention in the form of mandatory and public reporting of AI harm, commencing with the central aggregation of this data, to better enable preventive data analysis that will lead to fewer societal harms.
We advocate for these changes to be made in a principal piece of AI legislation. We also implore the Government to take a holistic and cross-departmental approach to AI and its intersections with other forms of existing legislation that are arguably misaligned with AI and other emerging technologies and present legal uncertainties: amongst others, intellectual property, anti-discrimination, data privacy, and consumer and competition laws.
2 Proposal for a Regulation of the European Parliament and of the Council: Laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts [2021], art 2(5d).
Our submission focuses on the treatment of changes to IP laws.
Finally, we have an opportunity as a sovereign nation to determine how we shape our future as it intersects with novel and emerging technologies, and we must take that responsibility seriously. There are obvious, well-researched and significant economic benefits that AI can generate, which we need to take advantage of3. However, there are certain AI activities whose outcomes do not have a place in Australian society. We advocate for strong and clear prohibitions on certain types of AI activities. We set those out in Annexure C.
We explain how we could do things differently in Australia and achieve this goal.
1. ‘Front line’ observations about AI compliance impact on Australian
businesses
11. KomplyAi's current technology supports organisations based on the most up-to-date international frameworks for responsible AI, such as those of the OECD and the U.S. NIST. We have also developed a risk rating system, and a supporting intelligent workflow engine, as part of our technology to aid organisations in complying with the requirements of the impending draft European Union AI Act: for example, to undertake and automatically generate the governance artefacts required under those draft laws. We have undertaken research with responsible AI experts to determine the technical meaning of those requirements, and the practical application of how organisations may comply with them.
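By way of illustration only, the sketch below shows, in simplified Python, how a risk-rating workflow of this kind can map a described AI use to an indicative tier and a checklist of governance artefacts. The tier names, keywords and artefact lists are our own illustrative assumptions; they do not reproduce KomplyAi's engine or the text of the draft EU AI Act.

```python
# Illustrative sketch only: an assumed mapping from an AI use description to
# an indicative risk tier and artefact checklist, loosely inspired by the
# draft EU AI Act. Keywords, tiers and artefact lists are invented examples.
from dataclasses import dataclass, field

PROHIBITED_KEYWORDS = {"subliminal manipulation", "social scoring"}
HIGH_RISK_KEYWORDS = {"biometric identification", "credit scoring",
                      "recruitment screening", "critical infrastructure"}

ARTEFACTS_BY_TIER = {
    "high": ["technical documentation", "risk management plan",
             "quality management plan", "data management plan",
             "instructions for use"],
    "limited": ["transparency notice"],
    "minimal": [],
}

@dataclass
class Assessment:
    tier: str
    required_artefacts: list = field(default_factory=list)

def assess(use_description: str) -> Assessment:
    """Return an indicative tier and a governance artefact checklist."""
    text = use_description.lower()
    if any(k in text for k in PROHIBITED_KEYWORDS):
        return Assessment("prohibited")  # cannot be placed on market at all
    if any(k in text for k in HIGH_RISK_KEYWORDS):
        return Assessment("high", ARTEFACTS_BY_TIER["high"])
    if "chatbot" in text or "generative" in text:
        return Assessment("limited", ARTEFACTS_BY_TIER["limited"])
    return Assessment("minimal")

print(assess("recruitment screening model for shortlisting CVs").tier)  # high
```

In a real workflow engine the checklist would then drive document generation and sign-off tasks; the value of the automation lies in keeping the tier logic current as the draft laws change.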
12. From a practical point of view, this has provided KomplyAi with invaluable insights into those draft laws, such as the potential compliance burden and costings, and the required level and breadth of expertise for resourcing such adherence. It has provided insight, for example, into the much-debated question of whether those required governance structures and risk mitigations may in fact hinder the growth and prospering of AI innovation.
13. So far, based on our research and testing while building out our technologies, we believe that well-funded medium-sized, growth-stage and large enterprise companies, particularly those with existing in-house expertise, won't be significantly impacted by changes akin to the European AI Act, i.e., from an overall compliance costing perspective. With the technologies that are emerging (and aided by in-house capacities), many of these organisations are well placed in GRC, and levels of automation in governance technologies will, into the future, create substantial efficiencies. We are already building these. Those versed in the intersecting areas of data privacy and cyber security, and in existing models for quality, risk and data assessments and assurance, will likely adapt well.
3 Michael Chui et al, The economic potential of generative AI: The next productivity frontier (Report, McKinsey & Company, 14 June 2023).
14. Although AI technologies (in particular foundation models) require a different approach, and present quite novel risks (some unknown), the required evaluations are not insurmountable. Many requirements, in any event, draw upon usual good practice for technology builds, in conceptualisation, development and testing (e.g., generation of technical documentation such as instructions for use, risk and quality management plans, and data management plans) and are part of a well-funded organisation's normal risk mitigations when developing, sourcing or using technologies. In any event, to understand how AI systems work it is necessary to have sufficient documentation, most importantly of how the model was trained. We are of the strong view that these assessments, in time, will and should become part of an organisation's technology hygiene, like data privacy and cyber security.
15. However, our initial observations are that for start-ups and SMEs, in the immediate future, particularly those with no technical founder and little in-house expertise in AI beyond a level of experimentation and outsourcing, the costs involved in engaging the human expertise for the level of evaluations required under the draft European AI Act are likely prohibitive and out of reach. Instructively, the draft European AI Act has made some key allowances for start-ups and SMEs in their compliance, as well as priority access to regulatory sandboxes and reduced regulatory penalties. We don't have data to make determinative observations about how far that will alleviate start-up and SME burden; it may greatly assist. Deploying and using a system like ours, which dramatically expedites this process and reduces costs, would greatly assist this category. However, there are likely other ways to address this category within Australia that the Government should explore.
16. The impact on this category of start-ups and SMEs is further exacerbated by our national skills shortages in these technical areas of AI in Australia, and the extremely high costs involved in engaging specialists in this market. In our experience, the AI expert consultants in responsible AI whom we have engaged cost anywhere from AUD 650 to 2,500 per hour, and upwards. The engagement of these experts, and the breadth of skills required to comply, is for smaller businesses at present prohibitive, and likely seriously impactful for innovators within our country, potentially serving as another means of market dominance for the large players that are well placed to bear new compliance costs.
17. We believe it is imperative for Australia to think differently about how we address these AI challenges for smaller players in our country. Notably, KomplyAi would be happy to provide the Government with access to our platform to better understand the breadth of expertise required in this area of evaluation and testing, and its intersection with smaller business. This would provide useful data about how to address the very real challenge of ensuring that smaller players are not disproportionately impacted by any AI changes in law.
2. The AI landscape has changed
18. The AI landscape has significantly changed since at least 30 November 2022, with the release of OpenAI's ChatGPT, an AI chatbot application based on the initial large language model GPT-3.5, and the global AI race that has ensued. Today, companies and individuals in their millions are relying on generative AI tools to create human-level text, images, videos and audio at an unprecedented scale.
3. Government intervention is required
19. We strongly believe that Government inaction on AI could result in disproportionately negative consequences for Australia. Government intervention is required to address the tangible and intangible harms of AI. That intervention should take a more holistic approach to maximising AI opportunities while protecting Australians. One obvious weapon in the intervention arsenal is the introduction of new legislation. In this submission we strongly advocate for legislative intervention as a means of better elevating safe innovation in Australia, in the form of a principal piece of legislation that governs AI. Non-regulatory, ethical AI principles have been globally ineffective, and the commentary around those principles generally unpersuasive to the market4. We note that KomplyAi has advocated for Government intervention in the form of principal AI legislation since the previous Coalition Government released a paper on AI in early 2022.
20. Our research (including the curation of data repositories of relevant global laws intersecting with AI in high-risk vertical sectors) has shown there are many existing laws that speak to AI. There are some obvious existing laws that require more immediate attention to better facilitate AI research and innovation, and ensure safety. We cite a number of those below, including the importance of ensuring our intellectual property laws5 are brought in line with modern technologies and kept up-to-date.
21. Generally speaking, we believe a risk-based and proportionate response to AI is appropriate. Australia should also take this as an opportunity to elevate innovation in AI in this country, and truly prioritise its public benefits. Australia should focus on distinguishing AI harms that have occurred from those that we suspect may occur (existential harms), while allowing malleability for the latter to be readily addressed6.
4. AI harm data is lacking but AI legislation could address this
4 Kristen Migliorini, ‘Artificial Intelligence (Ai)—An Australian Perspective’ (2021) LVI(No. 3) les Nouvelles - Journal of the Licensing Executives Society, 184, 188.
5 Copyright Act 1968 (Cth), Patents Act 1990 (Cth).
6 Researchers at the Center for Security and Emerging Technology state that "tracking efforts by AIIS and AIAAIC suggest that the number of harms experienced in relation to AI systems has grown rapidly over the past 5-10 years": Mia Hoffmann and Heather Frase, 'Adding Structure to AI Harm', Center for Security and Emerging Technology (Issue Brief, July 2023), 6.
22. The form such legislation should take is a vexed issue, arguably requiring more research in view of the fast-changing nature of the AI landscape, and the tremendous advances that have recently occurred and will continue to occur. Analysis of AI harm requires reliable data on harm incidents. However, there is currently no comprehensive Australian or global public repository of such incident reporting, nor any impetus for an organisation to interpret certain harms as AI harms and publicly report them7. Monitoring and examining AI harms is critical to mitigating those harms, including by providing an improved understanding of the causes of harms and better preventative measures: for example, a better understanding of emergent AI model abilities that may or may not materialise. Legislation could be a means of obtaining such data on AI harms by way of reasonable, mandatory disclosures and a public database that is machine readable and readily searchable by the public. The European Union has proposed this, and so too has the Canadian Government in a particular form8.
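To make the idea of a machine-readable register concrete, the sketch below shows one possible shape for a harm incident record serialised as JSON. Every field name and category is an assumption of ours for illustration, not a proposed statutory schema.

```python
# Illustrative sketch of a machine-readable AI harm incident record for a
# public, searchable register. Field names and categories are assumptions.
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class HarmIncident:
    incident_id: str
    reported_on: str          # ISO 8601 date string
    organisation: str
    system_name: str
    harm_category: str        # e.g. "physical", "psychological", "economic"
    affected_group: str       # e.g. "children", "general public"
    description: str
    mitigation: str

record = HarmIncident(
    incident_id="AU-2023-000123",
    reported_on=date(2023, 8, 4).isoformat(),
    organisation="Example Pty Ltd",
    system_name="example-recommender-v2",
    harm_category="psychological",
    affected_group="children",
    description="Algorithmic nudging led to compulsive use patterns.",
    mitigation="Feature disabled pending retraining and re-evaluation.",
)

# JSON keeps the register machine readable and trivially searchable.
print(json.dumps(asdict(record), indent=2))
```

A consistent schema of this kind is what would enable the central aggregation and preventive analysis advocated above.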
23. In Annexure A, we have included information about the European Union's recently updated risk management requirements for foundation models under the EU AI Act, and undertaken a comparison with the voluntary actions that the large U.S. technology companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) have agreed to under the recent Biden administration announcement9.
24. We have also made a further comparison with a recent study undertaken by Stanford University's Center for Research on Foundation Models (June 2023) about the current status of compliance of some of the key players in the foundation model arena in those areas under the current draft EU AI Act10. The Stanford study makes clear that further work is required for those companies to ensure compliance under the impending laws; in particular, it identifies an absence of the open information and documentation that will be required under those laws.
5. Australia’s approach should be flexible and iterative
25. There are a number of factors clearly supporting the view that Australia's approach to legislating AI should be flexible and iterative. It is currently impossible to predict every possible use case for AI and, more importantly, the highest-risk categorisation of AI harm based on a use-case model like the European Union's11. While the European Union has made recent changes to better accommodate foundation model risks, and ascribed
7 Proposal for a Regulation of the European Parliament and of the Council: Laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts [2021], arts 51, 60, 62.
8 Proposal for a Regulation of the European Parliament and of the Council: Laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts [2021], arts 51, 60.
9 White House, FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial
Intelligence Companies to Manage the Risks Posed by AI, 21 July 2023.
10 Rishi Bommasani et al, Do Foundation Model Providers Comply with the Draft EU AI Act? (Stanford Center for
Research on Foundation Models, 2023).
11 Proposal for a Regulation of the European Parliament and of the Council: Laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts [2021], art 6.
responsibilities based on an organisation's or individual's role in the AI value chain, we are still in a place of learning. The intersecting harms of advanced foundation models, with capabilities that enable them to be adapted and integrated at scale into countless third-party, downstream AI systems, are not fully known. The general purpose nature of these foundation models, the financial benefits, and the ongoing accelerated investment by large technology companies to make these models even more capable (and the models' own self-learning capabilities, models teaching models), mean this is not an issue of 'hype' that will easily go away.
26. There are many ways the Government can achieve flexibility and iteration to ensure an appropriate balance is struck with AI. One is targeted interim measures addressing the greatest immediate harms to Australians, which may mean the Government looks to enforce the application of existing laws and its current enforcement powers, and provides better clarification as part of a cross-Government taskforce.12 These existing laws include data privacy, anti-discrimination, consumer protection, competition, critical infrastructure, cyber security, intellectual property, and defence-based laws13.
27. In this regard, we emphasise the importance of co-ordination between regulatory authorities. Many key AI risks are not unique to any one industry: vendors may offer products or services into multiple markets, and cross-sectoral use cases may emerge. Regulatory co-operation will help mitigate the risks of regulatory arbitrage, duplication of rules and unwarranted differences of approach between sectors. An inconsistent patchwork of laws, application of laws, and enforcement action is the very worst thing that can happen to AI innovation in this country.
28. Further, we recommend providing in an enabling Act for an initial piloting period with key stakeholders14. This would be followed by a formal review of the legislation within a stated time period, or triggered by a particular future event that may be tied to emergent societal harms.
6. Coherence, not conformity
29. We strongly agree the Government should be looking for a form of coherence, not conformity, with global counterparts, based on our differences as a country. Annexure B of this submission sets out a list of global laws, and those we agree or disagree with as they relate to constructing Australia's AI framework. We also indicate where a level of coherence with those laws is instead required.
30. The sometimes uncomfortable truths for a smaller, thriving, but geographically isolated nation are that we are currently heavily dependent on IP imports, including as part of
12 FTC, CFPB, EEOC and DOJ, Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated
Systems (April 2023).
13 Federal Trade Commission Act of 1914, Sherman Antitrust Act of 1890, Civil Rights Act of 1964, Dodd-Frank Wall
Street Reform and Consumer Protection Act of 2010, Consumer Product Safety Act of 1972.
14 Defence Trade Controls Act 2012 (Cth), s 74A(3).
extremely complicated global AI supply chains. We also have an undersupply of AI skills and talent (particularly for unique testing and evaluation of AI systems in specialist areas), and a clear imbalance in innovation funding (Government and venture) as compared with countries like the U.S. and China15. We are also arguably in a new age of geopolitical instability that should form part of any weighting of views about technologies that can be powerfully used in that context. AI cyber weaponry, AI deployed in critical infrastructure, and large amounts of compute power provided to unverified end users need to be high on the Government agenda for Australian consideration16. In this regard, we fully support the Government in a more holistic and co-ordinated regulatory approach to its consideration of laws intersecting with AI. These areas cannot be viewed in isolation from one another, or as siloed activities within Government Departments. Historically, we do not believe there has been a form of generalised technology that cuts across such a large number of Government Departments, regulators, and existing areas of the law, at this scale and with this level of potential societal impact.
7. Embedding coherence into laws
31. How should coherence be practically implemented? We have undertaken research into the status of global laws and standards intersecting with AI, and into harmonisation efforts. Further, we have considered the important weight of Australia's position as a sovereign nation. Australia's own cultural nuances may be incompatible or irreconcilable with other countries' laws, requiring a different approach. For example, is the identification of risk, variously based on models of AI risk characterisation, and prescriptive (or more loosely framed) risk mitigations for those harms, the best way for Australia to regulate17? Is the original European AI regulatory model, which could be said to be
15 OECD Data, Australia <https://data.oecd.org/australia.htm>.
16 An expanded "Know Your Customer" or KYC requirement for AI services, such as that proposed by Microsoft, which builds on the same concept used in high-risk financial services, including for cloud operators where AI is being deployed for sensitive uses: Microsoft, Governing AI: A Blueprint for the Future (Report, 25 May 2023), 5-7.
17 For example, Europe has only recently implemented new rules for foundation models and generative AI systems, which are applicable regardless of whether the systems are deemed high risk or not: Proposal for a Regulation of the European Parliament and of the Council: Laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts [2021], art 28b.
Brazil has proposed a risk-based framework regulating three levels of risk for AI systems: excessive risk, high risk and non-high risk: Bill No. 2338 of 2023 (On the use of Artificial Intelligence systems) [2023].
China has announced regulations, to be implemented in 2023, governing generative AI and other AI technologies: Cyberspace Administration of China et al, Interim Measures for the Management of Generative Artificial Intelligence Services [July 2023]; State Internet Information Office, Ministry of Industry and Information Technology of the People's Republic of China, Order No. 12 of the Ministry of Public Security of the People's Republic of China, Provisions on the Administration of Deep Synthesis of Internet Information Services [November 2022].
Canada has also proposed regulatory measures targeting high-impact AI systems: Artificial Intelligence and Data Act 2022 (Bill C-27, Digital Charter Implementation Act 2022).
singularly focused on product safety mechanisms, flexible enough to accommodate new forms of powerful emerging AI technology (or significantly more advanced forms of existing technology)? Annexure D of this submission sets out those use cases that are considered to be high risk under the EU AI Act, and a comparison with Canada's Bill C-27, where high-impact AI systems are determined by way of an assessment against a number of useful criteria.
8. Prohibited AI activities
32. Firstly, we strongly advocate for a prohibition section, not unlike that in the EU AI Act, focused on the negative and reprehensible outcomes of AI for end users: members of the public such as children, the elderly, and marginalised and more vulnerable users. We have a strong history in Australia of legislating in this manner, ensuring the safety of end users, and in particular more vulnerable users, in a technology-neutral setting. We are a trusted country for innovation. There are global debates about whether such prohibition is anti-innovation and a deterrent to international investment in Australia. We don't agree.
33. In fact, we believe prohibition will attract the best innovators to and within our country, spurred on by a vibrant consumer base and a buoyant market that trusts AI products and sees their inherent value. In the discourse of public debate, the factual truths of these issues are sometimes lost under the weight of hype. In our strong view, the current EU approach, and the explanatory information citing example use cases, is arguably clear in its intent to prohibit AI activities at the absolute highest end of impact on users' fundamental human rights, health and safety18. We believe
18 The latest technological advances include new sensors capturing bio-signals and the development of brain-computer interfaces which translate brain activity into machine-readable input. These technologies are potentially highly intrusive, allowing for detection of thought or intent and possibly influencing the operation of the human brain.
For example, a Spanish supermarket chain has implemented a facial recognition system to detect people with restraining orders and to prevent them from entering the shop. The supermarket's CCTV collects facial images of customers entering a shop, and the software creates biometric templates, which are then compared with the templates of persons that are not allowed to enter the premises. Another example is facial recognition that is used to record the working hours of employees at construction sites.
In the U.S., the National Center for Border Security and Immigration (BORDERS), a United States Department of Homeland Security Center of Excellence, has developed an Automated Virtual Agent for Truth Assessment in Real-time (AVATAR). This system conducts fully automated interviews at the border, during which it analyses a traveller's nonverbal and verbal behaviour, such as eye movement, gestures and pitch. The AVATAR then rates the person's credibility and sends the result to a border control officer. In collaboration with the EU's border agency FRONTEX, the system was also tested at the airport in Bucharest. The use of these biometric deception detection systems is highly contested, as they are not based on sound science but rather on a chain of assumptions about the relationship between biometric indicators and internal intentions.
Biometric techniques and AI are also used for controversial medical purposes. In a recent U.S. experiment, social media photos were analysed, using algorithmic facial recognition, metadata components and colour analysis, to identify predictive markers of depression.
there are AI activities that do not currently have a place in Australian society and are a cause of great concern. We do not need to be the testing ground for these types of technologies, which are currently, or will be, prohibited in other like-minded countries.
34. For example, the prohibition on subliminal techniques at Article 5(1)(a):
Subliminal techniques (covert or manipulative methods intended to materially influence behaviour by impairing a person's ability to make informed decisions)
“The placing on the market, putting into service or use of an AI system that deploys
subliminal techniques beyond a person’s consciousness or purposefully manipulative or
deceptive techniques, with the objective to or the effect of materially distorting a
person’s or a group of persons behaviour by appreciably impairing the person’s ability
to make an informed decision, thereby causing the person to take a decision they would
not have taken or otherwise in a manner that causes or is likely to cause that person,
another person or group of persons significant harm19”.
35. In Annexure C of this submission, we set out the currently proposed wording in the EU AI Act, and we provide some examples of those activities in real-life scenarios. We are of the strong view that there isn't a place for those prohibited AI activities in our Australian society, and that there should be a discussion about where our national lines are drawn, for reasons of public safety. When we talk about public safety, we do not simply mean tangible harm that is physical or observable damage, but also intangible harms that cannot generally be observed, including those that may not be readily observable at this point in time, and harms to more vulnerable demographics such as our children (e.g., mental or psychological harm caused by AI over-reliance and algorithmic nudges).
9. Australia, doing things differently
36. Australia can learn from first movers overseas such as the European Union and its draft impending laws, including the very recent changes that were required to better accommodate foundation models. This has laid an excellent and ground-breaking foundation for the rest of the world in responsible AI. However, the political landscape of the European Union and its somewhat complicated legal system, including the entirety of
In the edtech sector in China, there have also been reports concerning the use of facial recognition software in Chinese schools that monitors the students' behaviour and gives teachers feedback on the students' concentration levels.
Christiane Wendehorst and Yannic Duller, Biometric Recognition and Behavioural Detection: Assessing the ethical aspects of biometric recognition and behavioural detection techniques with a focus on their current and future use in public spaces (Study, European Parliament, August 2021).
19 Proposal for a Regulation of the European Parliament and of the Council: Laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts [2021], art 5(1)(a).
the New Legislative Framework20 supporting the AI ecosystem, and the prominence of
political compromises, may not be representative of the best way for Australia.
37. Accordingly, there are a number of areas where we propose coherence, but not conformity, with impending global laws such as the seminal European Union AI Act. We understand that the European laws will apply extraterritorially, and enterprises in Australia with global operations are electing to comply with the EU regime for administrative efficiency and as a more practical and immediate way of ensuring safe AI and preventing reputational risk. However, we have also included some proposals in our submission that would currently be unique to Australia, constituting a level of coherence based on achieving the same objectives in responsible AI. We have prepared a diagram to illustrate our proposal (Annexure G).
10. A “digital technology passport”
38. There are many reasons why a static use-case approach to rating AI risk is challenging for Australia: for example, the fast-moving pace of this technology; the complicated layering of hardware, software and open source in AI systems, including those that are of general application; the increasing confluence of fields of technology (such as AI and our heavily invested quantum computing industry); and the real challenges of multi-contributor supply chains for components of AI systems, and the allocation of liability, where Australia arguably sits at the end of those chains. However, there are strong foundations for Australia to be a leader in responsible AI.
20 The New Legislative Framework consists of Regulation (EC) No 765/2008 of the European Parliament and of the
Council of 9 July 2008 setting out the requirements for accreditation and market surveillance relating to the marketing of products and repealing Regulation [2008] OJ L 218, Decision No 768/2008/EC of the European
Parliament and of the Council of 9 July 2008 on a common framework for the marketing of products, and repealing
Council Decision 93/465/EEC [2008] OJ L 218 and Regulation (EU) 2019/1020 of the European Parliament and of the
Council of 20 June 2019 on market surveillance and compliance of products and amending Directive 2004/42/EC and Regulations (EC) No 765/2008 and (EU) No 305/2011 [2019].
Other relevant laws supporting the EU’s AI ecosystem include:
1. Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single
Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act) [2022] OJ L 277
2. Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on
contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU)
2020/1828 (Digital Markets Act) [2022] OJ L 265
3. Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil
liability rules to artificial intelligence (AI Liability Directive) [2022]
4. Regulation (EU) 2023/988 of the European Parliament and of the Council of 10 May 2023 on general
product safety, amending Regulation (EU) No 1025/2012 of the European Parliament and of the Council
and Directive (EU) 2020/1828 of the European Parliament and the Council, and repealing
Directive 2001/95/EC of the European Parliament and of the Council and Council Directive 87/357/EEC
[2023] OJ L 135.
39. Organisational-level accreditation, or a form of organisation-specific licensing (not unlike many already in existence in Australia), could be introduced to better promote emerging technologies, including AI, and position us as a leader in this area.
40. We recommend creation of a digital "Technology Passport", with the ultimate aim of promoting competitive safety and engendering a form of fluidity, technology neutrality, and self-certification (with some key exclusions). The focus should not be on the nature or characteristics of the technology, but on the competitive advantage gained by an organisation being incentivised, as a trusted player in this market, to have safety measures in place in key areas to better advance its position and that of its end users. This will have the benefit of allowing an organisation greater scope to operate within: (1) a number of AI activities; (2) a number of roles as part of the AI value chain; (3) varied AI sub-domains (and AI system components); (4) cross-pollination of those sub-domains; and (5) potentially other forms of emerging technology, so that Australia can move quickly to leverage those benefits.
41. A Technology Passport would act as a form of trust-mark, signalling to the end users of these technologies that there is a solid basis on which those organisations can be trusted. We recommend the creation of a central repository of searchable details of licence holders and their activities, instructions for use, potential harms and mitigations, and reporting of AI harms. This will also have the effect of encouraging consistency in many areas, such as the construction and communication of instructions for use, and transparency in the explanation of AI operations and decision making, that better serves the public interest.
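As a purely illustrative sketch, the snippet below shows how entries in such a repository might be structured and searched. The field names, statuses and search behaviour are our assumptions only; a real register would sit behind a public API with far richer querying.

```python
# Toy sketch of a searchable public register of Technology Passport holders.
# Structure, fields and statuses are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class PassportEntry:
    licence_no: str
    organisation: str
    ai_activities: tuple         # e.g. ("developer", "deployer")
    instructions_url: str        # public instructions for use
    known_harms: tuple
    mitigations: tuple
    status: str                  # "active", "suspended", "revoked"

REGISTER = [
    PassportEntry("TP-0001", "Example AI Pty Ltd", ("developer",),
                  "https://example.com/ifu", ("bias in ranking",),
                  ("quarterly bias audit",), "active"),
]

def search(term: str) -> list:
    """Simple full-text search across organisation names and activities."""
    term = term.lower()
    return [e for e in REGISTER
            if term in e.organisation.lower()
            or any(term in a for a in e.ai_activities)]

print([e.licence_no for e in search("developer")])  # ['TP-0001']
```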
42. We note that not all organisations will be required to hold accreditation status or an organisation-specific licence merely because they intersect with AI as a provider, developer, deployer, exporter, or possibly a user. The criteria for accreditation or licensing would be based on a more nuanced approach to the calculation of an organisation's engagement with high-impact AI, and its compliance with existing laws.
43. We have set out in Annexure E an example of the types of areas that could be considered as part of the determination of whether, for example, a licence is required. We also note there are strong existing parallels globally with these types of assessments of risk, for example in export controls and licensing requirements here in Australia and overseas in the U.S.21. Organisations operating in this technology area (and deep technologies) are very familiar with those assessments and well placed from an infrastructure perspective to comply.
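A hedged sketch of the style of determination contemplated by Annexure E follows. The thresholds, the SME definition and the high-impact trigger are invented for illustration; the actual criteria would be set out in the legislation.

```python
# Illustrative sketch of a rule-based licence-requirement determination.
# SME_EMPLOYEE_CAP and HIGH_IMPACT_TRIGGER are invented assumptions.
from dataclasses import dataclass

@dataclass
class OrgProfile:
    employees: int
    annual_turnover_aud: float
    prohibited_activities: bool      # engages in any prohibited AI activity
    high_impact_criteria_met: int    # count of high-impact criteria triggered

SME_EMPLOYEE_CAP = 200               # assumed SME threshold
HIGH_IMPACT_TRIGGER = 1              # assumed: any one high-impact criterion

def licence_required(org: OrgProfile) -> bool:
    if org.prohibited_activities:
        # Prohibited activities are banned outright; no licence can cure this.
        raise ValueError("prohibited AI activity: not licensable")
    is_sme = org.employees <= SME_EMPLOYEE_CAP
    if is_sme and org.high_impact_criteria_met < HIGH_IMPACT_TRIGGER:
        return False                 # excluded, but may still opt in voluntarily
    return org.high_impact_criteria_met >= HIGH_IMPACT_TRIGGER

print(licence_required(OrgProfile(25, 2e6, False, 0)))   # False: excluded SME
print(licence_required(OrgProfile(900, 5e8, False, 2)))  # True
```

The design point is that the assessment runs over the organisation's profile rather than each individual technology, which is what keeps the administrative burden on Government tractable.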
44. The accreditation or licence could be granted for a stated period of time, or until an event occurs or is triggered that causes the licence to be, for example, amended, suspended, revoked or cancelled. An example of such an event may be a
21 Bureau of Industry and Security, US Department of Commerce, 'Commerce Control List'; Bureau of Industry and Security, US Department of Commerce, 'Commerce Country Chart' (8 April 2022); Defence and Strategic Goods List 2021 (Cth), part 2.
major and preventable AI malfunction, or multiple occurrences of such malfunctions, resulting in public distrust.
45. We also propose clear exclusions for particular organisations, such as start-ups and SMEs that are not otherwise involved in prohibited AI activities, or do not intersect with particular high-impact criteria. We want to prevent an imbalanced system in which underfunded innovators in Australia, such as our start-ups and SMEs engaging in a few lower-risk AI activity types, are required to comply with an entire compliance regime based on a singular use case. However, organisations that do not legally require a licence may also elect to apply for one and comply (much like under current privacy laws) for market leverage.
46. Importantly, we also point to the European Union and its exclusion of stated activity types, such as forms of non-commercial research and development22. This is an extremely important, and not often discussed, exclusion, on which further information is to follow from the EU. It is certainly an area to follow closely and come to a view about in the Australian context.
47. Arguably, the present text of the EU AI Act requires further clarification about the intersections and exclusions of R&D, open source AI components, and the treatment of high-risk AI system incorporation and foundation models23. We could write an entire further submission about how to treat open source and foundation models in this context, but note for present purposes that this is an area of utmost importance to this debate. We were heavily involved in similar extensive debates about exclusions for R&D under the Defence Trade Controls legislation in Australia.
48. Further, there would need to be close determination of licence applicability, consequences for non-compliance, and a potential moratorium on enforcement for a period of time where the licence authorisation relates to technologies already on market.
49. Infrastructure and supporting resources around such a regime would be required from the Government. Our established infrastructure and systems of approval, in areas such as the Therapeutic Goods Administration ('TGA'), are largely designed for point-in-time approvals. However, where technology is adapting and learning in situ (including generating new data), the review process, even for one form of technology in one context, could involve multiple rounds of review for significant modifications, and multiple touch points, for which resources do not currently exist.
22 Europe's upcoming AI Act will not apply to research, testing and development activities regarding an AI system prior to the system being placed on the market or put into service, provided that these activities are conducted respecting fundamental rights and applicable Union law. Real-world testing of AI systems is not included in this exemption: Proposal for a Regulation of the European Parliament and of the Council: Laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts [2021], art 2(5d).
23 Proposal for a Regulation of the European Parliament and of the Council: Laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts [2021], arts 2(5d), 6, 28b.
50. In view of the Government's limited resources and expertise in this area of AI evaluation, and the potential need to re-review AI technologies, this could be a useful beginning to digitising corporate compliance. We recommend a focus on the responsible actions of an organisation as they relate to its obligations for non-static technologies. This would mean that, where we don't currently have the resources or expertise at the scale required to facilitate a technology-based regime, Government resources largely go to assessing whether an organisation is meeting its required thresholds and standards (due diligence), as opposed to assessing technologies, re-trained or modified technologies, or interrogating large algorithms and their impact.
51. It would also arguably better enable more efficient cross-border flows of AI technologies: for example, through a form of AI and blockchain certification of the AI supply chain and of the multiple unknown parties contributing technologies to an AI system, digital exchanges of information, and the reporting of areas of concern in real time so that action can be taken in the public interest (e.g., expedited removal of technology from market, and across the globe, where it presents a serious threat).
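The snippet below is a toy illustration of the kind of certification alluded to here: each contributor's attestation commits to the previous one via a hash, so tampering anywhere in the AI supply chain record breaks verification. It is a sketch of the underlying idea under our own assumptions, not a production blockchain design.

```python
# Toy sketch of hash-chained supply-chain attestations: each record commits
# to the previous one, so any tampering breaks verification of the chain.
import hashlib
import json

GENESIS = "0" * 64

def make_attestation(prev_hash: str, contributor: str, component: str,
                     claim: str) -> dict:
    body = {"prev": prev_hash, "contributor": contributor,
            "component": component, "claim": claim}
    # Hash is computed over the body before it is attached to the record.
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

a1 = make_attestation(GENESIS, "ModelLab Inc", "foundation-model-v1",
                      "trained per data management plan v3")
a2 = make_attestation(a1["hash"], "Integrator Pty Ltd", "chatbot-app",
                      "red-teamed against instructions for use")

def verify(chain: list) -> bool:
    """Recompute each hash and check that the back-links are intact."""
    prev = GENESIS
    for att in chain:
        body = {k: v for k, v in att.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if att["prev"] != prev or att["hash"] != expected:
            return False
        prev = att["hash"]
    return True

print(verify([a1, a2]))  # True
```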
11. Reflecting an international landscape & growing calls for cross-border AI safety mechanisms
52. This Australian system could evolve as and when the international landscape does. For example, we might hypothesise the future introduction of an international AI treaty, an international governing body on AI, and a level of global consensus on the highest-impact areas. In these eventualities, Australia would already be well placed from an infrastructure point of view to facilitate the exchange of information and cross-border flows of its technologies as part of exporting responsible AI. This is not unlike the global Wassenaar Arrangement, and its co-ordination and agreement of a list of dual-use and munitions technologies, and requirements on those frontier technologies that could do the most harm. Areas to be considered in the context of AI are disinformation at scale and uncontrollable superhuman intelligence.
53. Mechanisms for the safe and efficient use and trade of these technologies across international borders are an important imperative to get right.
12. Mandatory & public reporting of AI harms
54. In addition, we support pointed requirements for mandatory and public reporting of AI harm data, such as major AI malfunctions (not unlike data breach reporting and the reporting of cyber incidents as they relate to critical infrastructure), and transparency about reasonably foreseeable risks and organisational mitigations of those risks.
13. Australian innovators (start-ups and SMEs) shouldn’t be unduly
impacted
55. As previously noted, ensuring start-ups and SMEs are prioritised and not disproportionately affected by AI laws is something that needs to be closely considered in the Australian context, particularly in view of the current market dominance of large global technology companies.
56. Larger and well positioned upstream companies not only have the resources and infrastructure to easily comply with impending laws such as the EU AI Act; they are also in a privileged position as the original developers of many forms of seminal AI technologies, including foundation models, with knowledge that the public is not privy to.
57. From a safety point of view, companies should generally remain accountable for the commercialisation of technologies they place on market, particularly where they have knowledge that those technologies, such as foundation models, will be incorporated at scale into downstream applications in multiple contexts. Where this is reasonably foreseeable, and subject to intervening acts and the responsibilities of downstream stakeholders, there should be sureties about the technology and its operation during the entirety of the AI life cycle.
58. The European Union has managed the treatment of start-ups and SMEs, and ensured innovation, in a number of ways, including priority access to regulatory sandboxes and exclusion from more prescriptive compliance requirements. This includes a recently added provision that effectively voids contractual provisions unilaterally imposed on smaller players by larger technology providers that exclude the providers' legal liability and squarely place it on the smaller players24. Again, our competition experts would need to consider the strengths and weaknesses of such a provision, and how it compares with existing provisions under our competition law, such as the unfair terms provisions for standard form contracts. On first view, this provision appears more targeted and could potentially put start-ups and SMEs in a stronger position.
59. These areas are explored further in Annexure F, with nomination of our preferences
for treatment.
14. Existing laws in need of updating (IP laws are completely outdated)
60. There are many existing laws intersecting with AI that require changes. For the purposes
of this submission, we won’t focus on all of them.
24 Proposal for a Regulation of the European Parliament and of the Council: Laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts [2021], arts 1(ea), 28(a), 29a(4), 43(4a), 53a, 71(1).
61. Of key focus to us is our intellectual property legislative framework. In our strong view, it is outdated and in need of changes to better accommodate new forms of technologies, like AI, that were not contemplated when these laws were first written. This theme is not unique to Australia and its IP laws; it is being faced by many jurisdictions globally.
62. The continued uncertainties of IP laws (in particular copyright and patent laws), which, as in most countries around the world, are being hotly debated, are not beneficial for any side of the fence. Many existing copyright laws (and exclusions to them, such as "fair use" in the US) are being tested through current litigation25, especially in the US. The uncertainties and legal debates amongst academics, lawyers and AI experts need to be better addressed. Uncertainty in these laws leaves outcomes to single (or potentially multiple) judge-based decisions, decisions that then become persuasive in our own jurisdiction and court cases.
63. Changes to our IP laws, however, require broader review and public consideration, and Government intervention for legislative amendment, wherever the changes eventually land.
64. There are some recent examples of other jurisdictions attempting to deal with these vexed issues, balancing the rights of copyright owners with those of developers, providers and users of AI systems requiring large amounts of training data. The European Union, for example, is requiring certain disclosures of the copyrighted data that has been used to train AI as part of public documentation26.
65. One area that could usefully be focused upon in Australia is a mechanism for Government (and supporting agencies) to better support the advancement of foundation models that are in our public interest. This could mean better facilitating a process for copyrighted data protection practices: for example, infrastructure, standard licensing regimes and formulaic, benchmarked fees for reproduction and use, or avenues for facilitated and expedited digital negotiation procedures (in particular circumstances) for copyright owners for the reproduction of their protected copyrighted data in foundation models. In circumstances of alleged reproduction of that copyrighted material by an organisation developing foundation models or other forms of AI, where the use does not otherwise fall within an existing exemption, this would require recompense.
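Purely to illustrate what "formulaic and benchmarked" could mean in practice, the sketch below computes a hypothetical reproduction fee. Every rate, weighting and discount is an invented assumption; actual benchmarks would be set through consultation with copyright owners and industry.

```python
# Purely illustrative: a hypothetical benchmarked fee for reproducing
# copyrighted works in training data. All rates and weightings are invented.
BASE_RATE_PER_WORK_AUD = 0.05        # assumed benchmark rate per work used

CATEGORY_WEIGHT = {                  # assumed weightings by work type
    "text": 1.0,
    "image": 2.0,
    "audio": 1.5,
}

def training_reproduction_fee(works_used: int, category: str,
                              commercial: bool) -> float:
    """Benchmarked fee for reproducing copyrighted works in training data."""
    fee = works_used * BASE_RATE_PER_WORK_AUD * CATEGORY_WEIGHT[category]
    return fee * (1.0 if commercial else 0.25)  # assumed research discount

print(f"AUD {training_reproduction_fee(100_000, 'image', True):,.2f}")
# AUD 10,000.00
```

The attraction of a formula of this kind is predictability: both the model developer and the copyright owner can compute the recompense in advance rather than litigating it after the fact.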
66. Foundation models usually have billions, if not hundreds of billions, of parameters, requiring large amounts of training data and computing power. There is current research looking at ways to reduce these requirements, and their associated costs, but this is very much the case at this present time. Foundation models, when safe, could be incredibly beneficial for society, advancing and solving some of our greatest global problems. We therefore believe there is absolute value in creating an environment that better enables the collation of the best data to inform these models, which takes
25 Andersen et al v. Stability AI Ltd. et al, Docket No. 3:23-cv-00201 (N.D. Cal. Jan 13, 2023); Getty Images (US) Inc v.
Stability AI Inc. 1:2023cv00135 (2023).
26 Proposal for a Regulation of the European Parliament and of the Council: Laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts [2021], art 28b(4)(c).
into consideration issues of representation, diversity, and prevention of mass bias
and misinformation etc. There are many incumbent and well established markets for
information sharing and buying in copyright law consistent with this model.
67. Uncertain laws, litigation, disgruntled and impacted copyright owners, and
incomplete datasets, with resulting consequences such as bias perpetuated en masse,
are in no way a reasonable or stable basis for continuing these technological advances.
68. Data privacy, anti-discrimination and consumer and competition laws are fundamental
areas to closely review as part of broader changes here. For the purposes of this
submission, we don’t intend to interrogate these areas but are in a position to do so upon
request.
15. Quotas established by Government for "very large" foundation model providers (to benefit the Australian AI ecosystem)
69. Finally, to further elevate AI innovation, we could use this opportunity for Government
intervention (regulatory or non-regulatory) to support smaller players in the Australian AI
ecosystem, or public and underfunded institutions that are currently grappling with AI
uptake and use (such as public hospitals, schools and universities requiring AI skills and
computing power). Support here could take the form of positive innovation quotas, with
prescribed organisations required to meet those quotas by, for example, providing free
access to AI technologies or capabilities that help close digital divides, both in
impoverished settings and in gender and culturally diverse settings.
16. Australian requirements (diagram)
70. See Annexure G.
CONCLUSION
71. We strongly agree that Government intervention in the form of new AI laws is required.
There are impressive overseas models for how new responsible AI laws could be framed,
some of which propose to apply extraterritorially to Australian businesses. International
interoperability is important, and we believe the regulatory focus should shift toward
coherence with international responsible AI approaches while, at the same time,
accommodating our unique AI needs as a country.
72. Australia could take an approach that incentivises good and fair corporate governance
standards, and supports risk mitigations undertaken by organisations, by way of a licensing
regime. This would allow for technological fluidity and rapid application to the real-life
complexities of AI, with its layered, morphing, and generalised character. It should be
coupled with clearly defined prohibitions on particularly reprehensible AI activities and
outcomes, where those technologies do not have a place in Australian society.
73. We also recommend the inclusion of additional mandatory notifications and
disclosures that are publicly available, which would provide better data about AI harms,
supporting prevention rather than cure.
74. This, arguably, creates trust markers that may better encourage consumer uptake of AI and
support our economy, further opening the doors to rapid global exports and to the
generation of more complex combinations of AI and other emerging technologies, which
will form part of advancing layers of solutions and be efficiently managed. In other words,
we believe risks should be mitigated in a flexible and adaptive manner.
End.
Key Defined Terms
1. AI System: "A machine-based system that is designed to operate with varying levels of
autonomy and that can, for explicit or implicit objectives, generate outputs such as
predictions, recommendations, or decisions that influence physical or virtual
environments27."
2. Foundation Model: AI model that is trained on broad DATA at scale, is designed for
generality of output, and can be adapted to a wide range of distinctive tasks28.
3. Generative AI: Foundation models used in AI systems specifically “intended to
generate, with varying levels of autonomy, content such as complex text, images, audio,
or video”29.
4. Large Language Model: “A type of artificial intelligence model that has been trained
through deep learning algorithms to recognise, generate, translate, and/or summarise
vast quantities of written human language and textual data”30.
5. Machine Learning: "A branch of AI and computer science which focuses on
development of systems that are able to learn and adapt without following explicit
instructions imitating the way that humans learn, gradually improving its ACCURACY,
by using ALGORITHMS and statistical MODELS to analyse and draw inferences from
patterns in DATA"31.
6. Provider: "A natural or legal person, public authority, agency or other body that
develops an AI SYSTEM or that has an AI SYSTEM developed and places that system on
the market or puts it into service under its own name or trademark, whether for
payment or free of charge"32.
We also suggest that "AI Component" be a defined term.
27 Proposal for a Regulation of the European Parliament and of the Council: Laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts [2021], art 3(1).
28 Ibid, art 3(1)(c).
29 Ibid, art 28b(4).
30 European Commission, Knowledge Centre on Interpretation (Webpage).
31 Estévez Almenzar M. et al, Glossary of Human-Centric Artificial Intelligence (JRC Science for Policy Report,
Publications Office of the European Union, 2022), 40.
32 Proposal for a Regulation of the European Parliament and of the Council: Laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts [2021], art 3(2).
Tables of evidence for AI submission
Annexure A
Global analysis of foundation model controls (legislative and voluntary, and research analysis about current compliance adherence by major foundation model providers)
The table below maps each foundation model harm to: (1) the corresponding mitigation under the draft EU AI Act (legal requirements, related Articles)33; (2) the voluntary measures agreed by U.S. tech companies (comparative to the EU AI Act)34; and (3) Stanford University research grading foundation model providers against the draft EU AI Act (10 major foundation model providers, and their flagship models, scored on the 12 AI Act requirements on a scale from 0 (worst) to 4 (best))35.

33 Proposal for a Regulation of the European Parliament and of the Council: Laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts [2021], art 28b.
34 White House, FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI, 21 July 2023.
35 Rishi Bommasani et al, Do Foundation Model Providers Comply with the Draft EU AI Act? (Stanford Center for Research on Foundation Models, 2023).

Harm: Potential biased outcomes and increased susceptibility to adversarial attacks
Draft EU AI Act mitigation: Continuous risk assessment and risk mitigation (Article 28b(2)(a))
U.S. voluntary measures: On a broad scale, companies will invest in cybersecurity and implement the NIST AI Risk Management Framework. The companies will also work toward sharing information among companies and governments regarding trust and safety risks, dangerous or emergent capabilities, and attempts to circumvent safeguards.
Stanford grading: On a 40-point scale for risk mitigation, the average score for the top 10 foundation model providers was 16.

Harm: Harm of inaccurate or misleading output
Draft EU AI Act mitigation: Using quality datasets (Article 28b(2)(b))
U.S. voluntary measures: The companies commit to prioritising research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination, and protecting privacy.
Stanford grading: On a 40-point scale for using quality data sources, the average score for the top 10 foundation model providers was 22.

Harm: Violation of legal safeguards and harm to users' rights
Draft EU AI Act mitigation: Compliance by design, capabilities and limitations (Article 28b(2)(c))
U.S. voluntary measures: Some companies, such as Microsoft, will ensure a layered safety-by-design approach so that models remain safe, secure, and within human control. They will also support the development of a licensing regime to regulate secure development and deployment.
Stanford grading: On a 40-point scale for design and development with appropriate levels of performance, the average score for the top 10 foundation model providers was 27.

Harm: Biases, errors, or vulnerabilities in the model leading to unexpected or unreliable outputs
Draft EU AI Act mitigation: Extensive testing and evaluation (Article 28b(2)(c))
U.S. voluntary measures: The companies commit to internal and external security testing of all AI systems before their release. The companies will also share information across the industry and with governments, civil society, and academia on managing AI risks.
Stanford grading: On a 40-point scale for carrying out testing, the average score for the top 10 foundation model providers was 10.

Harm: Increased resource waste and energy consumption
Draft EU AI Act mitigation: Standards and environmental impact (Article 28b(2)(d))
U.S. voluntary measures: The companies aim to develop and deploy advanced AI systems to help address society's greatest challenges, emphasising their positive impact rather than exacerbating issues.
Stanford grading: On a 40-point scale for incorporating existing standards to reduce energy use, the average score for the top 10 foundation model providers was 15.

Harm: Misuse of the model's capabilities and limitations
Draft EU AI Act mitigation: Downstream technical documentation and instructions for use (Article 28b(2)(e))
U.S. voluntary measures: Companies will develop more initiatives to support downstream providers in understanding the model and its users. The companies also commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system. The companies will enable members of the public to lodge reports on their AI systems' capabilities, limitations, and areas of appropriate and inappropriate use.
Stanford grading: On a 40-point scale for providing technical documentation, the average score for the top 10 foundation model providers was 24.

Harm: Inconsistent or unreliable performance of the model
Draft EU AI Act mitigation: Quality management system (Article 28b(2)(f))
U.S. voluntary measures: The companies commit to internal and external red-teaming of models or systems in areas including misuse, societal risks, and national security concerns, such as bio, cyber, and other safety areas. The companies will also facilitate third-party discovery and reporting of vulnerabilities in their AI systems.
Stanford grading: On a 40-point scale for incorporating a quality management system, the average score for the top 10 foundation model providers was 15.

Harm: Unauthorised use or dissemination of copyrighted content
Draft EU AI Act mitigation: Disclosure of copyrighted data (Article 28b(4)(c))
U.S. voluntary measures: We understand that there is no current commitment to disclose the use of copyrighted data.
Stanford grading: On a 40-point scale for copyright disclosure, the average score for the top 10 foundation model providers was 7.

Harm: Decreased transparency and oversight of the model's deployment and use
Draft EU AI Act mitigation: Registration (Article 28b(2)(g))
U.S. voluntary measures: The companies commit to publishing reports for all new significant model public releases within scope. The reports will indicate model or system capabilities, limitations, and domains of appropriate and inappropriate use, and include discussions on societal risks, such as effects on fairness and bias. However, there is no clear commitment to public disclosure of serious incidents or major malfunctions on a public register.
Stanford grading: On a 40-point scale for disclosing the foundation model on the market, the average score for the top 10 foundation model providers was 9.
The 10 major foundation model providers considered as part of this study are:
1. OpenAI – GPT-4
2. Cohere – Command
3. Stability AI – Stable Diffusion v2
4. Anthropic – Claude
5. Google – PaLM 2
6. Meta – LLaMA
7. BigScience – BLOOM
8. AI21 Labs – Jurassic-2
9. Aleph Alpha – Luminous
10. EleutherAI – GPT-NeoX
The research was conducted by the Stanford Center for Research on Foundation Models (CRFM), Institute for Human-Centered Artificial
Intelligence, and released in 2023. The study's cited limitations can be found in the published paper.
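To make the grading arithmetic concrete: each provider is scored from 0 to 4 on each requirement, so with 10 providers a requirement's total is out of 40. The short Python sketch below illustrates this aggregation only; the per-provider scores used are invented placeholders, not the study's actual data.

    # Hypothetical illustration of the per-requirement aggregation used above.
    # Each requirement is scored 0-4 per provider; 10 providers gives a
    # per-requirement total out of 40. These scores are invented placeholders.

    scores = {  # requirement -> list of 10 per-provider scores (0-4 each)
        "copyright_disclosure": [0, 0, 1, 0, 2, 1, 2, 0, 0, 1],
        "risk_mitigation":      [2, 1, 2, 3, 1, 1, 2, 1, 2, 1],
    }

    for requirement, per_provider in scores.items():
        total = sum(per_provider)  # out of 4 * 10 = 40
        print(f"{requirement}: {total}/40")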
Annexure B
Each design feature below is marked with KomplyAi's position (an "X" against Yes, No, or Coherence in the original table), followed by the supporting AI references (region/country + law).

Prohibited AI practices
KomplyAi's position: X

Europe36
• Artificial Intelligence Act 2021, Article 5(1)(a)- AI systems that deploy
subliminal techniques or use manipulative/deceptive techniques to hinder a
user’s informed decision making. This excludes AI systems approved for
therapeutic purposes on the basis of informed user consent.
• Artificial Intelligence Act 2021, Article 5(1)(b)-AI systems that exploit
vulnerabilities of a person or group of persons based on their characteristics
to then distort user behaviour and potentially cause them significant harm.
• Artificial Intelligence Act 2021, Article 5(1)(ba)- Biometric categorisation
systems which categorise people based on their sensitive or protected
attributes. This excludes AI systems approved for therapeutic purposes on
the basis of informed user consent.
• Artificial Intelligence Act 2021, Article 5(1)(c)-AI systems used for social
scoring, classifying or evaluating people based on their social behaviour and
the social score potentially leads to detrimental or unfavourable treatment
for users.
36 Proposal for a Regulation of the European Parliament and of the Council: Laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts [2021], art 28b.
• Artificial Intelligence Act 2021, Article 5(1)(da)- AI systems that make risk
assessments of people or profile people to assess risk of offending,
reoffending, predicting the occurrence/re-occurrence of criminal and
administrative offences.
• Artificial Intelligence Act 2021, Article 5(1)(db)- AI systems that create or
expand facial recognition databases through the untargeted scraping of
facial images from the internet or CCTV footage
• Artificial Intelligence Act 2021, Article 5(1)(dc)- AI systems to infer emotions
of people in the areas of law enforcement, border management, in
workplace and education institutions.
• Artificial Intelligence Act 2021, Article 5(1)(d)- ‘Real-time’ remote biometric
identification systems in publicly accessible spaces.
• Artificial Intelligence Act 2021, Article 5(1)(d)- AI systems analysing recorded
footage of publicly accessible spaces through ‘post’ remote biometric
identification systems, unless pre-judicially authorised and strictly necessary
for the targeted search connected to a specific serious criminal offense.
Brazil 37
• Bill nº 2.338/2023, Article 14(I)- AI systems employing subliminal techniques
that aim to or induce a person to behave in a harmful or dangerous manner
towards the health or safety or fundamental rights of people.
• Bill nº 2.338/2023, Article 14(II)- AI systems exploiting vulnerabilities of
people, such related to their age or physical or mental disability, to induce a
person to behave in a harmful or dangerous manner towards the health or
safety or fundamental rights of people.
• Bill nº 2.338/2023, Article 14(II)- AI systems used by the government to
evaluate, classify or rank people based on their social behavior or personality
37 Bill No. 2338 of 2023 (On the use of Artificial Intelligence systems) [2023].
traits to determine access to goods and services and public policies, in an
illegitimate or disproportionate manner.
Canada 38
• The Artificial Intelligence and Data Act (AIDA) 2022 (Bill C-27, the Digital
Charter Implementation Act, 2022) does not outright ban AI systems that
present an unacceptable level of risk, but it does create offences for certain
practices:
o Section 38: possessing or using personal information for the purpose
of creating an AI system if the personal information was not lawfully
obtained
o Section 39(a): knowingly (or with reckless disregard) using an AI
system that is likely to cause serious physical or psychological harm
to an individual or substantial damage to property, if such harm
occurs.
o Section 39(b): making an AI system available for use with the intent
to defraud the public and to cause substantial economic loss to an
individual if such loss occurs.
US
• California: The California Fair Employment and Housing Council, on March
15, 2022, published draft modifications to its employment anti-
discrimination laws that would make it unlawful for an employer or covered
entity to use automated-decision systems that screen applicants or
employees on the basis of a protected characteristic (subject to some
38 Artificial Intelligence and Data Act 2022 (BILL C-27, Digital Charter Implementation Act 2022), 44th Parliament, 1st session.
exceptions). Assembly Bill 331 aims to prohibit the use by deployers of
automated decision tools that contributes to algorithmic discrimination.39
• Colorado: Colo. Rev. Stat. § 10-3-1104.9 prohibits insurers from using
external consumer data and information sources, as well as any algorithms
or predictive models that use ECDIS, in a way that unfairly discriminates
based on race, color, national or ethnic origin, religion, sex, sexual
orientation, disability, gender identity or gender expression.
• Maryland: H.B. 1202 (2023) bans the use of “a facial recognition service for
the purpose of creating a facial template during an applicant’s interview for
employment,” unless the applicant signs a waiver.
• New York City, Local Law 2021/144 (2021) to amend the administrative code
of the city of New York, in relation to automated employment decision tools:
Employers and employment agencies are prohibited from using automated
employment decision tools unless they have been subjected to a “bias audit”
within the last year and the results of the most recent bias audit and the
“distribution date of the tool” have been made publicly available on the
employer’s or employment agency’s website.
China40
• Interim Measures for the Management of Generative Artificial Intelligence Services 2023 (proposed), Article 4- Content generated by generative artificial intelligence must not contain material that subverts state power, overthrows the socialist system, incites secession, undermines national unity, promotes terrorism, extremism, or ethnic hatred and discrimination, or contains violence, obscene or pornographic information, false information, or content that may disrupt economic and social order.

39 California Assembly Bill 331, Automated decision tools (2023)
40 Cybersecurity Administration of China et al, Interim Measures for the Management of Generative Artificial Intelligence Services [July 2023]; State Internet Information Office, Ministry of Industry and Information Technology of the People's Republic of China, Order No. 12 of the Ministry of Public Security of the People's Republic of China, Provisions on the Administration of Deep Synthesis of Internet Information Services [November 2022].
• Regulations on the Administration of Deep Synthesis of Internet Information
Services 2022, Article 6- No company or person is permitted to use ‘deep
synthesis’ technologies to create, duplicate, share, or spread information
that is forbidden by laws or administrative rules. This includes the generation
and dissemination of fake news. These services are not permitted to be used
for activities that risk national security, harm the nation's reputation, infringe
on public interests, or disrupt the economy. Any activities that break laws,
administrative regulations, disrupt social order, or infringe on others' rights
are strictly prohibited.
o Article 23- Deep synthesis technology is defined as “technology that
uses deep learning, virtual reality, and other synthesis algorithms to
generate text, images, audio, video, and virtual scenes.”
Risk based & proportionate response and ensuring innovation in AI (as a stated objective)
KomplyAi's position: X

Europe
• Artificial Intelligence Act 2021, Article 6(1)- AI systems will be considered high risk if they are safety components or products covered by Union legislation and if they are required to undergo a third-party conformity assessment related to risks for health and safety.
• Artificial Intelligence Act 2021, Article 6(2)- AI systems will be considered high risk if they fall under one or more of the critical areas and use cases referred to in Annex III, only if they pose a significant risk of harm to the health, safety or fundamental rights of people. AI systems used in managing or operating critical infrastructure will be high-risk if they pose a significant risk of harm to the environment.
• Artificial Intelligence Act 2021, Article 16-List of obligations for providers of
high risk AI systems to follow, before placing their AI systems on the market.
Brazil
• Bill nº 2.338/2023, Article 13- Providers of AI systems should conduct a
preliminary assessment to classify the degree of risk carried by their systems
as ‘Excessive’ or ‘High’.
• Bill nº 2.338/2023, Article 17- The "high risk" category identifies 14 unique
AI uses, including systems for social aid, employment decisions, critical
infrastructure operation like water and electricity, biometric identification,
and autonomous vehicles.
• Bill nº 2.338/2023, Articles 19 and 20- Providers of High risk AI systems will
have to adopt governance measures, follow documentation requirements,
register the AI systems for evaluation, carry out rigorous testing, establish
data management and risk control measures and implement human
oversight.
UK
• The UK currently has a risk-based approach to regulate AI. The Government
also indicated in June 2023 that it will adopt a pro-innovation approach that
encourages regulation on a sectoral basis. More details to come.41
Canada 42
• AIDA, Section 7- A person who is responsible for an artificial intelligence
system must assess whether it is a high-impact system.
41 Department for Science, Innovation and Technology and Office for Artificial Intelligence, AI regulation: a pro-innovation approach (Policy Paper, 29 March 2023).
42 Artificial Intelligence and Data Act 2022 (BILL C-27, Digital Charter Implementation Act 2022), 44th Parliament, 1st session.
• AIDA, Section 8- Providers of high-impact systems must
establish measures to identify, assess and mitigate the risks of harm or
biased output that could result from the use of the system.
• AIDA, Section 9- Providers of high-impact systems must establish measures
to monitor compliance with the risk mitigation measures they implement.
• AIDA, Sections 11(1) and (2)- Providers of high-impact systems or those
who manage their operation, must publish on a publicly available website a
plain-language description of the system.
• AIDA, Section 12- Providers of high-impact systems must notify the Minister
if the use of the system results or is likely to result in material harm.
USA
• The U.S. federal government's AI risk management is generally not directly
risk-based. While the 2019 executive order (EO 13859) and subsequent OMB
guidance suggest a risk-based approach, other initiatives like the AI Bill of
Rights (non-binding) don't strictly follow this type of framework for AI
regulation.43
Risk scoring system, such as "High, Medium, Limited or Low"
KomplyAi's position: X

Australia
• The Government is currently considering a framework where AI systems are classified as low risk, medium risk and high risk.

Europe
• The European Parliament has adopted a risk-based framework of classifying
AI Systems as prohibited, high risk, limited risk and low risk.
43Executive Office of the President, 'Executive Order 13859: Maintaining American Leadership in Artificial Intelligence' (11 February 2019); White House, 'Blueprint for an AI Bill of Rights' (October 2022).
Brazil
• Bill nº 2.338/2023 proposes three levels of risk for AI systems, which are
similar to the European Union AI Act: (i) excessive risk, in which the use is
prohibited; (ii) high risk; and (iii) non-high risk. Before deploying or using the
AI system, it shall pass a preliminary self-assessment analysis conducted by
the AI provider to classify its risk level.
UK
• The UK does not currently have a tiered risk-scoring framework for AI. The
Government indicated in June 2023 that it will adopt a pro-innovation
approach that encourages regulation on a sectoral basis.44
Canada
• While Canada doesn't follow a strict risk-based framework like the EU's AI
Act, the AIDA does outline requirements for 'high impact' AI systems, along
with other less impactful systems.
USA45
• The U.S. federal government's AI risk management is generally not directly
risk-based. While the 2019 executive order (EO 13859) and subsequent OMB
guidance suggest a risk-based approach, other initiatives like the AI Bill of
Rights (non-binding) don't strictly follow this type of framework for AI
regulation.
44 Department for Science, Innovation and Technology and Office for Artificial Intelligence, AI regulation: a pro-innovation approach (Policy Paper, 29 March 2023).
45 Executive Office of the President, 'Executive Order 13859: Maintaining American Leadership in Artificial Intelligence' (11 February 2019); White House, 'Blueprint for an AI Bill of Rights' (October 2022).
Narrow use cases established for high risk characterisation and compliance requirements
KomplyAi's position: X

Europe
• Artificial Intelligence Act 2021, Annex III- AI systems falling under each critical use case will automatically be considered high risk (paraphrased below):
o Biometric and biometrics-based systems
o Systems used in the management and operation of critical infrastructure
o Systems used in education and vocational training
o Systems used in employment, workers management and access to self-employment
o Systems used to determine access to and enjoyment of essential private services and public services and benefits
o Systems used in law enforcement
o Systems used in migration, asylum and border control management
o Systems used in administration of justice and democratic processes
More generalised criterion to determine high impact AI systems and compliance requirements
KomplyAi's position: X

Canada
• The Government considers the following to be among the key factors to be examined in determining which AI systems would be considered high-impact46:
o Evidence of risks of harm to health and safety, or a risk of adverse impact on human rights, based on both the intended purpose and potential unintended consequences;
o The severity of potential harms;
o The scale of use;
46 Government of Canada, The Artificial Intelligence and Data Act (AIDA) – Companion document (2023).
o The nature of harms or adverse impacts that have already taken
place;
o The extent to which for practical or legal reasons it is not reasonably
possible to opt-out from that system;
o Imbalances of economic or social circumstances, or age of impacted
persons; and
o The degree to which the risks are adequately regulated under
another law.
Specific requirements for foundation models and compliance requirements based on the organisations' role in the AI supply chain
KomplyAi's position: X

Europe

Foundation models
• Artificial Intelligence Act 2021, Article 28b- Providers of foundation models need to, prior to making them available on the market, ensure that they are compliant with the following requirements (paraphrased below):
o Continuous Risk Assessment and Risk Mitigation
o Using Quality Datasets
o Compliance-by-Design
o Standards and Environmental Impact
o Technical Documentation
o Overall Quality Management
o Registration
• The requirements apply regardless of whether foundation models are
provided as standalone models or embedded in another AI system, or
provided under free and open source licences, as a service, as well as other
distribution channels.
AI supply chain
• Artificial Intelligence Act 2021, Article 28- Any distributors, importers,
deployers or other third parties involved in the AI value chain will be
classified as AI system providers if they:
o Brand a high-risk AI system already on the market with their name
or trademark, or
o Substantially modify an existing high-risk AI system, keeping it high-
risk, or
o Substantially modify a general purpose AI system, making it high-
risk if it wasn't before.
• The original provider must supply the new provider with the AI system's
technical documents and other necessary information based on the
acknowledged state of the art, aiding them in fulfilling their AI Act
obligations.
Specific requirements for generative AI and compliance requirements based on the organisations' role in the AI supply chain
KomplyAi's position: X

Europe
• Artificial Intelligence Act 2021, Article 28b- Providers of foundation models that involve generative AI need to, prior to making them available on the market, ensure that they are compliant with the following:
o Transparency obligations
o Extensive testing to ensure adequate safeguards against the generation of content in breach of Union law
o Documentation and publication of detailed summaries of the data used to train the generative AI system, where such training data is protected under copyright law.
Proportionate treatment for start-ups and SMEs & competition protection measures for smaller innovators
KomplyAi's position: X

Europe
• Artificial Intelligence Act 2021, Article 28(a): Unfair or anti-competitive contractual terms unilaterally imposed on an SME or start-up will not be binding.
• Artificial Intelligence Act 2021, Articles 1(ea) and 53a(2)(c): The AI Act will introduce measures to foster innovation, emphasising support for SMEs and start-ups. It provides that European Member States will establish regulatory sandboxes and measures to lessen regulatory burdens on SMEs and start-ups. Access to all AI regulatory sandboxes is intended to be free of charge for SMEs and start-ups.
• Artificial Intelligence Act 2021, Article 53a(3): Sandbox participants,
especially SMEs and start-ups, will receive pre-deployment guidance on AI
Act requirements, assistance with standardization and certification, and
access to Digital Single Market resources like Testing Facilities, Digital Hubs,
and Centres of Excellence.
• Artificial Intelligence Act 2021, Article 43(4)(a): Third-party conformity
assessment fees will be tailored for SMEs according to their size and market
share.
• Artificial Intelligence Act 2021, Article 29a(4): Start-ups and SMEs with high
risk AI systems will not be required to carry out extensive consultations with
different stakeholders when performing a fundamental rights impact
assessment.
• Artificial Intelligence Act 2021, Article 11(1): SMEs and start-ups can create
alternative documentation fulfilling the objectives of the technical
documentation requirements.
Brazil example
• The government provides federal initiatives to assist startups, such as
business mentoring, financial investment guidance, business modeling,
infrastructure, and training support.47
General requirements for transparency, explainability etc., applying to lower and limited risk AI
KomplyAi's position: X

Europe
• Artificial Intelligence Act 2021, Article 4a- All operators which fall under the AI Act are encouraged to adopt the following general principles:
o 'human agency and oversight'
o 'technical robustness and safety'
o 'privacy and data governance'
o 'transparency'
o 'diversity, non-discrimination and fairness'
o 'social and environmental well-being'
• The Act translates the general principles into specific requirements for
providers and operators of high-risk AI systems.
47 Government of Brazil, Startup Point (Website)

Central AI governing body
KomplyAi's position: X

Europe
• Artificial Intelligence Act 2021, Article 56- Establishes an independent body called the 'European Artificial Intelligence Office' (AI Office), with its seat located in Brussels.
• Artificial Intelligence Act 2021, Article 56b- The AI Office is responsible for undertaking a wide range of tasks, such as supporting the implementation of the AI Act, monitoring its effective application, promoting AI literacy, serving as a mediator in discussions about the AI Act's application, coordinating joint investigations, and contributing to effective cooperation with the competent authorities of third countries and with international organisations, among other tasks.
Canada
• AIDA, Section 33(1)- Artificial Intelligence and Data Commissioner is
established to assist in the enforcement of the proposed law. In addition to
administration and enforcement of the Act, the Commissioner's work will
include supporting and coordinating with other regulators to ensure
consistent regulatory capacity across different contexts, as well as tracking
and studying of potential systemic effects of AI systems in order to inform
administrative and policy decisions.
UK
• The Office for Artificial Intelligence, a unit within the Department for Science,
Innovation and Technology, is responsible for overseeing implementation of
the National AI Strategy.
USA
• The National Artificial Intelligence Initiative Office, located in the White
House Office of Science and Technology Policy (OSTP), is legislated by the
National Artificial Intelligence Initiative Act (DIVISION E, TITLE LI, SEC. 5102)
to coordinate and support the National AI Initiative.
Changes to laws behind the introduction of an AI principal piece of legislation (general updates to data privacy, anti-discrimination, intellectual property, consumer protection, competition laws, product safety, cyber security, and export controls)
KomplyAi's position: X

Europe
• AI Liability Directive 2022/0303 (proposed)
• Cybersecurity Regulation 2022/0085 (proposed)
• Cyber Resilience Act 2022/0272 (proposed)
• Cyber Solidarity Act 2023/01099 (proposed)
• Data Act 2022/0047 (proposed)
• Data Governance Act 2022/868
• European Health Data Space 2022/0140 (proposed)
• Regulation on data collection for short-term rental 2022/0358 (proposed)
• Harmonisation of GDPR enforcement 2023/0202 (proposed)
• Interoperable Europe Act 2022/0379 (proposed)
• Copyright Directive 2019/790
• Design Directive 2022/0392 (proposed)
• Standard essential patents 2023/0133 (proposed)
• Compulsory licensing of patents 2023/0129 (proposed)
• General Product Safety Regulation 2023/988
• Digital Services Act 2022/2065
• Digital Market Act 2022/1925
• Digital Operational Resilience Act 2022/2554
• Crypto-assets Regulation 2023/1114
• Financial Data Access Regulation 2023/0205 (proposed)
• Payment Services Regulation 2023/0210 (proposed)
Australia
• Privacy Act 1988 (Cth)
• Online Safety Act 2021 (Cth)
• Patents Act 1990 (Cth)
• Copyright Act 1968 (Cth)
• Data Availability and Transparency Act 2021 (Cth)
• Treasury Laws Amendment (Consumer Data Right) Act 2019 (Cth)
• Disability Discrimination Act 1992 (Cth)
• Racial Discrimination Act 1975 (Cth)
• Sex Discrimination Act 1984 (Cth)
• Age Discrimination Act 2004 (Cth)
• Competition and Consumer Act 2010 (Cth)
• Privacy and Personal Information Protection Act 1998 (NSW)
• Health Records and Information Privacy Act 2002 (NSW)
• Government Information (Public Access) Act 2009 (NSW)
• Data Sharing (Government Sector) Act 2015 (NSW)
• State Records Act 1998 (NSW)
• Anti-Discrimination Act 1977 (NSW)
• Workplace Surveillance Act 2005 (NSW)
• Surveillance Devices Act 2007 (NSW)
• Health Services Act 1997 (NSW)
• Fair Trading Act 1987 (NSW)
UK
• Data Protection Act 2018
• UK GDPR
• Data Protection & Digital Information (No.2) Bill
• Consumer Rights Act 2015
• The Computer Misuse Act 1990
Singapore
• Personal Data Protection Act 2012
• Customs Act 1960
• Regulation of Imports and Exports Act 1995
• Public Sector (Governance) Act 2018
• Registered Designs Act 2000
• Model Artificial Intelligence Governance Framework 2020
• The Principles to promote Fairness, Ethics, Accountability and
Transparency 2018
USA
• AI Bill of Rights 2022
• Executive Order 13960 Promoting the Use of Trustworthy Artificial
Intelligence in the Federal Government
• Executive Order 13859 Maintaining American Leadership in Artificial
Intelligence
• Executive Order 13985 on Further Advancing Racial Equity and Support for
Underserved Communities Through the Federal Government
• National AI R&D Strategic Plan 2023
• Bill for the Data Care Act 2022
• John S. McCain National Defense Authorization Act for Fiscal Year 2019
• National AI Initiative Act of 2020
• Advancing American AI Act of 2021 (proposed)
• AI Training Act of 2021
• Fair Credit Reporting Act of 1970
• Equal Credit Opportunity Act of 1974
Canada
• Consumer Privacy Protection Act (proposed)
• Artificial Intelligence and Data Act 2022 (Bill C-27, the Digital Charter
Implementation Act, 2022)
• Canada Consumer Product Safety Act (SC 2010, c. 21)
• Privacy Act (RSC, 1985, c. P-21)
• Personal Information Protection and Electronic Documents Act (S.C. 2000,
c. 5)
• Critical Cyber Systems Protection Act 2022 (proposed)
Brazil
• Bill of Law No. 21, of 2020 (Establishes foundations, principles, and
guidelines for artificial intelligence development and application in Brazil
and establishes other provisions)
• Bill No. 5051, of 2019 (Establishes the principles for the use of Artificial
Intelligence in Brazil)
• Bill No. 872 of 2021 (Provides for the ethical frameworks and guidelines that
underlie the development and use of Artificial Intelligence in Brazil)
• General Data Protection Law
• Decree No. 9,573/2018 (National Policy for the Security of Critical
Infrastructures)
• Decree No. 11,200/2022 (National Plan for the Security of Critical
Infrastructures within the Federal Public Administration)
• Decree No. 9,637/2018 (National Information Security Policy)
• Decree No. 10,222/2020 (National Cybersecurity Strategy)
• Brazilian Civil Code - Law No. 10,406/02
• Brazilian Consumer Protection Code (CDC) - Law No. 8,078/90
• Internet Legal Framework - Law No. 12,965/14
• Brazilian Criminal Code - as amended by Law No. 12,737/12
• Interception of Telephone Communication Law - Federal Law 9,296/96
• Complementary Law No. 105/01
• Brazilian Information Access Law - Federal Law Nº 12,527/11
• Good Payer's Registry Law - Federal Law Nº 12,414/11, amended by
Complementary Law No. 166/2019
• Brazilian Securities and Exchange Commission Resolution ("CVM") No. 35 of
2021
Annexure C
Global approach to prohibited AI activities

For each prohibited AI activity (under Government laws), the entries below set out the Government explanatory memoranda48 and real-life examples49, together with the region/country law.

48 Proposal for a Regulation of the European Parliament and of the Council: Laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts [2021], recitals.
49 Christiane Wenderhorst and Yannic Duller, Biometric Recognition and Behavioural Detection: Assessing the ethical aspects of biometric recognition and behavioural detection techniques with a focus on their current and future use in public spaces (Study, European Parliament, August 2021).

Prohibited AI activity: AI systems that employ subliminal techniques, manipulative, or deceptive methods to significantly distort an individual's or group's behaviour by hindering their ability to make informed decisions and thus potentially causing significant harm. The ban on AI systems using subliminal techniques does not apply to those used for approved therapeutic purposes, provided specific informed consent is obtained from exposed individuals or their legal guardians.
Explanatory memoranda: AI systems designed to significantly distort human behaviour, potentially leading to physical or psychological harm, are to be prohibited. These systems could use imperceptible subliminal techniques or exploit individual vulnerabilities to cause such distortions, potentially resulting in significant harm over time. This applies even if the provider or deployer did not intend to cause significant harm, as long as the harm results from manipulative or exploitative AI practices.
Real-life example: Using a device emitting sound at an inaudible frequency to lessen fatigue in truck drivers, enabling them to drive for extended periods.
Region/country law: European Union: Artificial Intelligence Act 2021

Prohibited AI activity: AI systems that exploit vulnerabilities of a person or a group, including characteristics of such individual's or group's known or predicted personality traits or social or economic situation, age, or physical or mental ability, with the objective or the effect of materially distorting the behaviour of that person or a person pertaining to that group in a manner that causes or is likely to cause that person or another person significant harm.
Explanatory memoranda: AI systems may exploit vulnerabilities related to individual traits like personality, age, or health to significantly alter behaviour, potentially causing substantial harm over time to individuals, others, or larger groups.
Real-life example: Cambridge Analytica, a political consulting firm, used Facebook data to build profiles of millions of users, inferring personality traits from their online activities. The firm allegedly used this data to deliver customised political ads during the 2016 U.S. Presidential Election and Brexit referendum, with claims of influencing voters. This case triggered widespread discussions on data privacy, AI ethics, and the risks of manipulation.
Region/country law: European Union: Artificial Intelligence Act 2021

Prohibited AI activity: Biometric categorisation systems that categorise natural persons according to sensitive or protected attributes or characteristics, or based on the inference of those attributes or characteristics. The prohibition of such systems does not apply to AI systems intended to be used for approved therapeutic purposes on the basis of specific informed consent of the individuals that are exposed to them or, where applicable, of their legal guardian.
Explanatory memoranda: AI systems that categorise individuals based on sensitive or protected characteristics, such as gender, race, political orientation, or religion, can be highly intrusive, violate human dignity, and risk causing discrimination.
Real-life example: Companies are increasingly employing AI tools to evaluate job applications, assessing candidates based on various criteria like qualifications and skills. However, if the data used to train AI systems contains biases, such as gender or racial bias, the tools might undervalue applicants from certain groups. For example, if an AI tool was trained on data from a tech industry traditionally dominated by men, it might unfairly downgrade applications from women, reinforcing existing gender imbalances.
Region/country law: European Union: Artificial Intelligence Act 2021

Prohibited AI activity: AI systems that are used for the social scoring, evaluation or classification of people or groups based on their social behaviour, where that social score leads to detrimental or unfavourable treatment.
Explanatory memoranda: AI systems that provide social scoring can lead to discriminatory results and the exclusion of certain groups, infringing on rights to dignity and non-discrimination. These systems evaluate individuals based on a multitude of data points and time occurrences related to their social behaviour, potentially resulting in disproportionate or unjust treatment in unrelated contexts.
Real-life example: China's Social Credit System assigns a score to citizens based on their financial behaviour, legal compliance, and social interactions, impacting various aspects of their lives, such as loan eligibility or travel rights.
Region/country law: European Union: Artificial Intelligence Act 2021

Prohibited AI activity: AI systems used for assessing the risk of individuals or groups committing or recommitting crimes, based on profiling or assessment of personality traits, locations, or past criminal behaviour.
Explanatory memoranda: AI systems used by law enforcement for predictions or risk assessments, based on profiling or past behaviour data, pose a risk of discrimination. These systems can violate human dignity and the principle of the presumption of innocence, and particularly target certain marginalised individuals or groups.
Real-life example: Predictive policing software used by U.S. law enforcement operates on past crime data to forecast future crime hotspots and guide police resource allocation. However, such systems have sparked concerns due to their potential to perpetuate existing policing patterns, resulting in the over-policing of communities of colour.
Region/country law: European Union: Artificial Intelligence Act 2021

Prohibited AI activity: AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.
Explanatory memoranda: The widespread and untargeted collection of biometric data from social media or CCTV footage to build facial recognition databases fosters a sense of mass surveillance and can result in severe infringements of fundamental rights, including privacy violations.
Real-life example: Clearview AI uses facial recognition technology to scrape images from the web to create a searchable biometric database. A user can upload a snapshot of any person, and in response the system generates additional matches from the internet using biometric comparison. It was found that global law enforcement agencies were using Clearview AI without sufficient oversight or disclosure. This revelation triggered a wave of lawsuits, with some focusing on the potential harm that Clearview's technology could inflict on survivors of domestic and sexual violence, undocumented immigrants, and marginalised communities. These groups unknowingly found themselves subjected to Clearview's 'face printing', potentially facing severe repercussions from the company's wide-ranging surveillance activities. Another example is supermarket chains in Spain that have introduced a facial recognition system to identify and prevent individuals with restraining orders from entering their stores. The system uses the store's CCTV to capture facial images of incoming customers. These images are then converted into biometric templates, which are cross-referenced with the templates of individuals prohibited from entering the premises.
Region/country law: European Union: Artificial Intelligence Act 2021

Prohibited AI activity: AI systems to infer emotions of a natural person in the areas of law enforcement, border management, and in workplace and education institutions.
Explanatory memoranda: There are serious doubts about the scientific validity of AI systems designed to detect emotions from physical features such as facial expressions or voice. These technologies often face issues of reliability and specificity, as emotions vary significantly among individuals and cultures. The use of such systems, particularly in contexts like law enforcement or education, can lead to misuse due to reliability issues. Therefore, deploying AI systems intended to detect emotional states in these contexts is prohibited.
Real-life example: AI systems are being used to suggest activities that align with a learner's cognitive abilities and provide real-time feedback, eliminating the need for a human teacher. These systems depend on analysing student performance data collected via IoT devices. However, there are concerns about practices like the use of facial recognition in Chinese schools to monitor student behaviour and concentration levels. Such collection and analysis of learning activities can seriously intrude on children's privacy, as the data can reveal detailed insights about a child's development, mental state, preferences, and weaknesses.
Region/country law: European Union: Artificial Intelligence Act 2021

Prohibited AI activity: The use of 'real-time' and 'post' remote biometric identification systems in publicly accessible spaces.
Explanatory memoranda: The use of AI systems for 'real-time' remote biometric identification in public spaces can infringe on personal rights and freedoms, create a sense of constant surveillance, and dissuade people from exercising their fundamental rights. Technical inaccuracies can lead to biased, potentially discriminatory results, especially regarding age, ethnicity, sex, or disabilities. These 'real-time' systems are prohibited due to their immediate impact and the limited opportunities for checks or corrections. Similarly, AI systems used for post-analysis of recorded public space footage are also prohibited, except when pre-judicial authorisation is obtained for law enforcement in the context of a specific serious criminal offence.
Real-life example: Law enforcement agencies around the world use real-time and remote biometric identification technology to locate suspects in public spaces. Incorrect predictions could not only result in discriminatory outcomes but can also threaten individual autonomy and self-determination, as the constant fear of being evaluated and monitored can alter public behaviours. The broad effects that mass surveillance can have on a population's behaviour are also strikingly evident when observing China's social credit score system, which penalises its citizens for non-adherence to societal norms.
Region/country law: European Union: Artificial Intelligence Act 2021
Annexure D
Global approach to a risk-based metric for controlling AI harms: high risk classifications
Use case Sector High Risk Region/country laws
Classification
HIGH RISK
Biometric identification of natural persons Biometric and biometrics- See Figures 1- European Union:
based systems 3 below Artificial Intelligence
Act 2021, annex III
AI systems intended to be used to make inferences about personal Biometric and biometrics-
characteristics of natural persons on the basis of biometric or based systems
biometrics-based data, including emotion recognition systems
AI systems intended to be used as safety components in the Management and
management and operation of road, rail and air traffic operation of critical
infrastructure
AI systems intended to be used as safety components in the Management and
management and operation of the supply of water, gas, heating, operation of critical
electricity and critical digital infrastructure infrastructure
AI systems intended to be used for the purpose of determining Education and vocational
access or materially influence decisions on admission or assigning training
natural persons to educational and vocational training institutions
AI systems intended to be used for the purpose of assessing students Education and vocational
in educational and vocational training institutions and for assessing training
participants in tests commonly required for admission to those
institutions
© 2023 KomplyAi Pty Ltd. 53
AI systems intended to be used for the purpose of assessing the Education and vocational
appropriate level of education for an individual and materially training
influencing the level of education and vocational training that
individual will receive or will be able to access
AI systems intended to be used for monitoring and detecting Education and vocational
prohibited behaviour of students during tests in the context of/within training
education and vocational training institutions
AI systems intended to be used for recruitment or selection of natural Employment, workers
persons, notably for placing targeted job advertisements, screening management and access
or filtering applications, evaluating candidates in the course of to self-employment
interviews or tests
AI systems intended to be used to make or materially influence Employment, workers
decisions affecting the initiation, promotion and termination of work- management and access
related contractual relationships, task allocation based on individual to self-employment
behaviour or personal traits or characteristics, or for monitoring and
evaluating performance and behavior of persons in such relationships
AI systems intended to be used by or on behalf of public authorities Access to and enjoyment
to evaluate the eligibility of natural persons for public assistance of essential private
benefits and services, including healthcare services and essential services and public
services, including but not limited to housing, electricity, services and benefits
heating/cooling and internet, as well as to grant, reduce, revoke,
increase or reclaim such benefits and services
AI systems intended to be used to evaluate the creditworthiness of Access to and enjoyment
natural persons or establish their credit score, with the exception of AI of essential private
systems used for the purpose of detecting financial fraud services and public
services and benefits
AI systems intended to be used for making decisions or materially Access to and enjoyment
influencing decisions on the eligibility of natural persons for health of essential private
and life insurance
© 2023 KomplyAi Pty Ltd. 54
services and public
services and benefits
AI systems intended to evaluate and classify emergency calls by Access to and enjoyment
natural persons or to be used to dispatch, or to establish priority in of essential private
the dispatching of emergency first response services, including by services and public
police and law enforcement, firefighters and medical aid, as well as of services and benefits
emergency healthcare patient triage systems
AI systems intended to be used by or on behalf of law enforcement Law enforcement
authorities, or by Union agencies, offices or bodies in support of law
enforcement authorities as polygraphs and similar tools ; insofar as
their use is permitted under relevant Union and national law
AI systems intended to be used by or on behalf of law enforcement Law enforcement
authorities, or by Union agencies, offices or bodies in support of law
enforcement authorities to evaluate of the reliability of evidence in
the course of investigation or prosecution of criminal offences
AI systems intended to be used by or on behalf of law enforcement Law enforcement
authorities or by Union agencies, offices or bodies in support of law
enforcement authorities for profiling of natural persons in the course
of detection, investigation or prosecution of criminal offences
AI systems intended to be used by or on behalf of law enforcement Law enforcement
authorities or by Union agencies, offices or bodies in support of law
enforcement authorities for crime analytics regarding natural persons,
allowing law enforcement authorities to search complex related and
unrelated large data sets available in different data sources or in
different data formats in order to identify unknown patterns or
discover hidden relationships in the data
AI systems intended to be used by or on behalf of competent public Migration, asylum and
authorities or by Union agencies, offices or bodies as polygraphs and border control
management
© 2023 KomplyAi Pty Ltd. 55
similar tools insofar as their use is permitted under relevant Union or
national law
AI systems intended to be used by or on behalf of competent public Migration, asylum and
authorities or by Union agencies, offices or bodies to assess a risk, border control
including a security risk, a risk of irregular immigration, or a health management
risk, posed by a natural person who intends to enter or has entered
into the territory of a Member State
AI systems intended to be used by or on behalf of competent public Migration, asylum and
authorities or by Union agencies, offices or bodies for the verification border control
of the authenticity of travel documents and supporting management
documentation of natural persons and detect non-authentic
documents by checking their security features
AI systems intended to be used by or on behalf of competent public Migration, asylum and
authorities or by Union agencies, offices or bodies to assist border control
competent public authorities for the examination and assessment of management
the veracity of evidence in relation to applications for asylum, visa
and residence permits and associated complaints with regard to the
eligibility of the natural persons applying for a status
AI systems intended to be used by or on behalf of competent public Migration, asylum and
authorities or by Union agencies, offices or bodies in migration, border control
asylum and border control management to monitor, surveil or management
process data in the context of border management activities, for the
purpose of detecting, recognising or identifying natural persons
AI systems intended to be used by or on behalf of competent public Migration, asylum and
authorities or by Union agencies, offices or bodies in migration, border control
asylum and border control management for the forecasting or management
prediction of trends related to migration movement and border
crossing
© 2023 KomplyAi Pty Ltd. 56
Administration of justice and democratic processes: AI systems intended to be used by a judicial authority or administrative body, or on their behalf, to assist a judicial authority or administrative body in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or used in a similar way in alternative dispute resolution.
Administration of justice and democratic processes: AI systems intended to be used for influencing the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda. This does not include AI systems whose output natural persons are not directly exposed to, such as tools used to organise, optimise and structure political campaigns from an administrative and logistic point of view.
Administration of justice and democratic processes: AI systems intended to be used by social media platforms that have been designated as very large online platforms within the meaning of Article 33 of the Digital Services Act (EU) 2022/2065, in their recommender systems to recommend to the recipient of the service user-generated content available on the platform.
Canada, Artificial Intelligence and Data Act (AIDA): "high impact" systems; see Figures 4 and 5 below.
Figures 1 to 5 (images not reproduced in this text version).
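To illustrate how classification lists of this kind might be operationalised in compliance tooling, the high-risk areas tabulated above could be encoded as structured data against which a proposed AI use case is screened. The following is a minimal, hypothetical sketch only: the area names and example uses are paraphrased from the table above, while the data structure, function name and keyword-matching logic are our own simplification, not a method prescribed by any of the laws cited.

# Illustrative only: encoding the high-risk areas above as structured data.
# Area names follow the proposed EU AI Act categories quoted in this
# annexure; the screening function is a hypothetical simplification and
# is no substitute for legal analysis of a use case.

HIGH_RISK_AREAS = {
    "Law enforcement": [
        "evaluate the reliability of evidence",
        "profiling of natural persons",
        "crime analytics regarding natural persons",
    ],
    "Migration, asylum and border control management": [
        "polygraphs and similar tools",
        "assess a risk posed by a natural person entering a member state",
        "verification of the authenticity of travel documents",
        "veracity of evidence for asylum, visa and residence permits",
        "monitor, surveil or process data in border management",
        "forecasting or prediction of trends related to migration",
    ],
    "Administration of justice and democratic processes": [
        "assist a judicial authority in researching and applying the law",
        "influencing the outcome of an election or referendum",
        "recommender systems of very large online platforms",
    ],
}

def flag_for_review(use_case_description: str) -> list[str]:
    """Return the areas whose listed uses appear verbatim in the description.

    A naive keyword screen: a real classification decision requires
    human legal review, not string matching.
    """
    description = use_case_description.lower()
    return [
        area
        for area, uses in HIGH_RISK_AREAS.items()
        if any(use in description for use in uses)
    ]

print(flag_for_review("System for profiling of natural persons in investigations"))
# ['Law enforcement']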
Annexure E
Classification System for Organisational Based Licensing
The classification would draw on the following dimensions:
- Organisation size (agnostic)
- Organisation type(s) (general)
- Sector(s) or activity (sub-domains, and nominated critical infrastructure)
- Technology(ies) (general and novel characteristics of AI technologies)
- Data privacy impact score (e.g., sensitive data, health data)
- Cybersecurity impact score
- End use impact score (e.g., fundamental rights impact assessment, accuracy)
- End user impact score (e.g., vulnerability profile of end user)

Conditions for licence holders: based on a scoring mechanism for a model AI classification. This score and its accuracy could become more intelligent over time once we have aggregated data about "AI harms" that can better facilitate our understanding of where we need targeted intervention in the form of licensing controls. Conditions could involve a period for licensing approval and reviews, and baseline foundational requirements for responsible AI and risk management. Conditions could be imposed, or a licence revoked, for particularly serious breaches (or multiple breaches), including optionality for external algorithmic audits. The Minister may respond and impose conditions on a licence holder where there is a serious risk of imminent harm, or to prevent serious harm. (An illustrative, non-prescriptive sketch of such a scoring mechanism is set out below.)
Examples of U.S. and Australian laws with a classification system for licensing and technology export activities include the U.S. Export Administration Regulations (EAR) (doc.gov) and the Australian General Export Licences (AUSGELs) administered by the Department of Defence.
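As a purely illustrative sketch of the scoring mechanism contemplated above, the following combines the impact dimensions in the table into a single licensing score whose thresholds determine licence conditions. The dimension names follow the table; all weights, thresholds, the sector uplift and the tier descriptions are invented placeholders rather than a proposed calibration, and, as noted above, any real scoring would be recalibrated over time against aggregated "AI harms" data.

# Hypothetical sketch of the licensing classification score described in
# Annexure E. Dimension names follow the table above; all weights,
# thresholds and tier labels are placeholders, not a proposed calibration.
from dataclasses import dataclass

@dataclass
class LicensingProfile:
    # Each impact score is assumed to be normalised to a 0-10 scale.
    data_privacy_impact: float      # e.g., sensitive or health data
    cybersecurity_impact: float     # exposure of systems to attack
    end_use_impact: float           # e.g., fundamental rights impact
    end_user_impact: float          # e.g., vulnerability profile of end users
    nominated_critical_infrastructure: bool

# Placeholder weights; in practice these would be refined as aggregated
# "AI harms" data improves the score's accuracy.
WEIGHTS = {
    "data_privacy_impact": 0.25,
    "cybersecurity_impact": 0.25,
    "end_use_impact": 0.3,
    "end_user_impact": 0.2,
}

def licence_conditions(profile: LicensingProfile) -> str:
    """Map a weighted impact score to a placeholder licence tier."""
    score = sum(getattr(profile, dim) * w for dim, w in WEIGHTS.items())
    if profile.nominated_critical_infrastructure:
        score += 2.0  # sector uplift (placeholder value)
    if score >= 8.0:
        return "licence with review period and external algorithmic audit"
    if score >= 5.0:
        return "licence with baseline responsible-AI and risk requirements"
    return "licence with standard conditions"

example = LicensingProfile(7.0, 4.0, 8.0, 6.0,
                           nominated_critical_infrastructure=False)
print(licence_conditions(example))
# licence with baseline responsible-AI and risk requirements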
Annexure F
Treatment of start-ups and SMEs under global AI laws
Type of support and the region/country law providing it:

European Union, Artificial Intelligence Act 2021, Articles 1(ea) and 53a(2)(c): The AI Act will introduce measures to foster innovation, emphasising support for SMEs and start-ups. It provides that European Member States will establish regulatory sandboxes and measures to lessen regulatory burdens on SMEs and start-ups. Access to all AI regulatory sandboxes is intended to be free of charge for SMEs and start-ups.

European Union, Artificial Intelligence Act 2021, Article 53a(3): Prospective providers in the sandboxes, in particular SMEs and start-ups, will have access to pre-deployment services such as guidance on the implementation of the AI Act requirements, to other value-adding services such as help with standardisation documents, certification and consultation, and to other Digital Single Market initiatives such as Testing & Experimentation Facilities, Digital Hubs, Centres of Excellence, and EU benchmarking capabilities.

European Union, Artificial Intelligence Act 2021, Article 43(4)(a): Third-party conformity assessment fees will be tailored for SMEs according to their size and market share.

European Union, Artificial Intelligence Act 2021, Article 29(a)(4): Start-ups and SMEs with high-risk AI systems will not be required to carry out extensive consultations with different stakeholders when performing a fundamental rights impact assessment.

European Union, Artificial Intelligence Act 2021, Article 11(1): SMEs and start-ups can create alternative documentation fulfilling the objectives of the technical documentation requirements.

Brazil, Bill nº 2.338/2023, Article 32: Competent bodies may establish the conditions, requirements, communication and disclosure channels for micro or small companies/start-ups that are providers and operators of AI systems.

Brazil, Bill nº 2.338/2023, Article 38: Competent bodies may authorise the implementation of an experimental regulatory environment for innovation in AI ('regulatory sandbox') for the entities applying for it and fulfilling the requirements specified by the proposed law.

United Kingdom, 'A pro-innovation approach to AI regulation' (Policy Paper, 2023): AI sandbox. The UK government will remove barriers to innovation and minimise legal and compliance risks for SMEs and businesses to help AI innovators navigate the regulatory landscape. The government has established a multi-regulator AI sandbox that will test how the UK's AI regulatory framework operates and whether regulators or the government should address unnecessary barriers to innovation.
Annexure G