
#325
Regulatory Institute
26 Jul 2023

Published name

Regulatory Institute

Upload 1

Automated Transcription

Regulatory Institute: Safe and responsible AI in Australia

The Regulatory Institute is a non-profit think tank that aims to improve regulation globally so that regulations benefit us all. We do this through research into good lawmaking and regulatory techniques, and pro bono consultancy to governments, legislatures and NGOs.
We published the Model Law on Artificial Intelligence (the Model Law), which forms the basis of our comments in this consultation by the Australian Department of Industry,
Science and Resources.

We have also referenced our Handbook "How to regulate?", which covers theoretical, methodological and applied aspects of legal regulation and lawmaking, concentrating on the best practices of more than 40 countries/jurisdictions around the world. Noting the importance of conformity infrastructure for regulating AI, we urge you to read the two chapters that cover this topic.

The Model Law provides a relatively complete basic pattern for the development of laws or regulation for the control of AI systems. The Model Law does not contain detailed technical provisions, which facilitates its use in all jurisdictions regardless of their resources or capacities. It should be used as a toolbox, a checklist or the basis for the development of an adapted law, and optimised as such. The Model Law is not intended to be used exactly as it is drafted. It points to important decisions to be taken by regulatory practitioners without preempting respective choices. Provisions in the Model Law often present choices, be they alternatives or add-on modules, that can be kept or deleted.

Given the importance of AI regulation, the Regulatory Institute is pleased to present its response to the call for consultation laid down in the June 2023 “Safe and responsible AI in Australia” Discussion paper (the Paper).

Consultation questions

Definitions
1. Do you agree with the definitions in this discussion paper? If not, what definitions do you prefer and why?

The definitions in the Paper look solid and self-sufficient, taking into account the aim and scope of the Paper. However, legal regulation, for the sake of both flexibility and formality, is likely to need a more elaborate system of definitions.

The Model Law provides a system of definitions with some optional elements (Section 2). Some of those definitions relate directly to AI itself, while others (e.g. "users", "clients", "traders") relate to the patterns of legal regulation embodied in the Model Law.

Variant 2 of the Model Law's AI definition may be used, in particular, to develop a definition encompassing non-AI ADMs, which can be useful if one decides to regulate those in the same way as AI, as mentioned on page 5 of the Paper.

Another of the Model Law's legal techniques worthy of attention is explicitly establishing a list of products and/or product features which qualify a product as AI (or as a matter falling under AI regulation) even if it does not meet all the criteria of the general definition of AI.

Potential gaps in approaches
2. What potential risks from AI are not covered by Australia’s existing regulatory approaches? Do you have suggestions for possible regulatory action to mitigate these risks?

Some claim that there is a risk of AI systems taking over control over humankind. The
Regulatory Institute has no competence to assess whether such risk really exists.
However, given that some experts assert such a risk, it would be preferable to address it, following the precautionary principle. If AI systems with such potential were not to be banned, they would merit continuous surveillance by a supervising authority, including a state agent installed at the place of business of the respective operator and permanently having access to all documents. Such special agents have been established in the US system for the control of certain companies which have massively infringed the law. They also operate outside the USA.

We also recommend that AI-related environmental (ecological) challenges be considered.
The Model Law suggests utilising potential environmental impact in the risk-management system (Section 3) and obliging developers and operators to assess potential environmental (ecological) risks (Section 9).

There is a range of smaller risks covered in the Model Law but not yet covered by Australia's existing regulatory approaches; see our final remarks at the end and the Model Law. See also our recently published Model Law on cross-border internet activities and virtual worlds, which contains additional aspects, particularly on the interaction between different actors.

3. Are there any further non-regulatory initiatives the Australian Government could implement to support responsible AI practices in Australia? Please describe these and their benefits or impacts.

In the Handbook "How to regulate?", there is a wide range of implementation measures that do not necessarily require legislative power but are still beneficial as non-regulatory initiatives. See in particular:
- Chapter 4, specifically Section 4.2.2 about regulatory measures other than
regulation;
- Chapter 12, particularly Sections 12.4 to 12.9.

We also recommend the following implementation measures either to be laid down in the law or to be outlined via other means or channels, such as budgetary and administrative measures:
- Minimum resources and minimum control intensity requirements for supervising
authorities, as suggested in some of our model laws;
- Alert portals where competitors and employees can drop documents and information pinpointing infringements, if so wished anonymously or with whistleblower protection;
- AI to be used by authorities to check compliance of AI systems (“AI battling AI”);
- Playground-like test environments for AI systems, either voluntarily or mandatorily
to be used;
- Online compliance test for AI systems, either voluntarily or mandatorily to be used;
- Code of conduct, merging the various codes of conduct and AI safety papers
currently popping up;
- Voluntary peer review on the application of legal requirements and the code of
conduct;

- Voluntary private certification on the application of legal requirements and the code
of conduct;
- Voluntary state quality mark based on the state’s assessment of the application of
legal requirements and of the code of conduct;
- Administrative quality rating by the supervising authority of AI systems, in view of various criteria relevant for users (in some jurisdictions, such rating systems require a legal basis; the Australia and New Zealand School of Government conference of 31 July 2023 might give you insight into this particular topic).

4. Do you have suggestions on coordination of AI governance across government?
Please outline the goals that any coordination mechanisms could achieve and how they could influence the development and uptake of AI in Australia.

The topic of harmonised interpretation between different federal departments or state administrations in charge of implementation is as relevant for Australia as it is for the EU. The topic has been addressed in the previous question.

For the coordination of positions within the federal government, recommendations could only be developed through a deeper dive into the Australian government system which we are unable to do as part of these comments. Generally speaking, we observe two approaches chosen by governments:
- Several ministries or institutions operate at the same level, but one of them is in a
lead / coordinating role; where conflicts cannot be resolved, the issue is moved to
the level of the ministers, or cabinet.
- Several ministries or institutions are coordinated by an additional entity which might
even have the right to decide in case of conflict; the possibilities for ministries to
bring the issue up to the level of the ministers might or might not be limited.
As AI policy might require quick decisions, the second model (used inter alia in France) might seem preferable.

Responses suitable for Australia
5. Are there any governance measures being taken or considered by other countries
(including any not discussed in this paper) that are relevant, adaptable and desirable for Australia?

For now, the EU AI Act seems to be the most solid and consistent regulatory response to emerging AI issues. With that said, we are eager to share the Regulatory Institute's response to the EU AI Act draft, which we submitted to the EU authorities (see separate attachment). The Regulatory Institute sees important possibilities for improvement even for the EU AI Act, and not only with regard to the Large Language Models which were not appropriately covered by the initial proposal. We understand that in the final negotiations (the so-called Trilogue) between the European Parliament, Council and Commission, LLMs will play an important role. For this reason it is worthwhile following developments in the EU.

Target areas
6. Should different approaches apply to public and private sector use of AI technologies? If so, how should the approaches differ?

We believe that public-private differentiation is a suitable method for regulating AI. Such an approach is common in those jurisdictions that regulate AI and specifically exclude military use of AI from the scope of general regulation.

The other ways this approach can be implemented are:
1. restricting development and/or distribution and/or usage of certain AI technologies
to public bodies only; and
2. laying down extra requirements on AI products (technologies) that are designed or sought to be used for public administration or governance; for example, at Section 44 of the Model Law we recommend policy consideration of offering a right to refuse processing by AI systems.

7. How can the Australian Government further support responsible AI practices in its own agencies?

Private and public organisations alike are obliged to follow the law. In our Model Law on AI, the obligations of developers, operators and users of AI are clearly outlined, including a standard for management systems and training requirements for the people involved. We believe a regulation with clear requirements and obligations is a good way to support responsible AI practices. We appreciate that, with the recent findings of the Royal Commission into Robodebt, trust in responsible AI practices in public administration might be low at the moment, and so we highlight the importance of good whistleblowing provisions as well as alert portals.

8. In what circumstances are generic solutions to the risks of AI most valuable? And in what circumstances are technology-specific solutions better? Please provide some examples.

The value of law and legal regulation is that legal rules tend to have a reasonable degree of generalisation and abstraction. Thus legal rules have a great potential to cover new emerging relationships without the need to amend existing regulation.

Considering that, it is rational to evaluate whether generic solutions are sufficient or whether they need to be backed or replaced with AI-specific regulation. Excessive use of technology-specific solutions could make for a complex legal system that quickly risks becoming disorganised. For example, we suggest that the licensing of AI-related activities could be governed by general licensing legislation (in terms of procedure, general framework etc.), while the licence requirements themselves should be technology-specific.

The same goes for other traditional legal institutions that have developed decent levels of generalisation and adaptability, like property law, legal liability, consumer protection etc. Thus we believe that generic regulation, backed where needed by AI-specific rules, should be favoured where possible. Our Handbook "How to regulate?" covers inter alia the balance between generic and sector-specific regulation, as well as other issues of regulatory architecture (Chapter 2).

9. Given the importance of transparency across the AI lifecycle, please share your thoughts on:
a. where and when transparency will be most critical and valuable to mitigate potential AI risks and to improve public trust and confidence in AI?
b. mandating transparency requirements across the private and public sectors, including how these requirements could be implemented.

a. Ensuring transparency is important and valuable at any stage of the AI product life cycle. The focus should be on identifying and strengthening the weak spots in the cycle. We consider the period when an AI system is already in legitimate use, having satisfied the regulatory requirements at the development stage and having been put on the market and/or into use according to legal requirements, to be such a potential weak spot. Once the basic permissions are granted and the basic requirements are met, regulatory influence and oversight tend to weaken. At this specific stage of the AI product lifespan, further development and updates of the product could potentially fail to comply with transparency requirements. Thus, we assume that mandated transparency requirements should be equally strong and demanding throughout the entire AI product lifespan, while paying particular attention to this period of active use. See Section 11 of the Model Law, which outlines a set of circumstances where mandatory transparency should be required.

b. Legal mechanisms to be used to mandate transparency requirements could include, inter alia: public and civil control, self-control, periodical assessments (self-assessments included), mandatory whistleblower mechanisms, alert portals providing whistleblower protection and permitting anonymous submission of documents or information, etc.

10. Do you have suggestions for:
a. Whether any high-risk AI applications or technologies should be banned completely?

Our Model Law (Section 13) recommends a total ban on the development, operation and use of AI systems for the following purposes:
- [Full] societal control;
- Social scoring of individuals [trespassing a concrete context such as behaviour on a
trading platform];
- Political profiling and repression;
- Manipulation of democratic elections and political processes;
- Interrupting public services;
- Causing damage to third parties;
- Exploitation of psychological or physical weaknesses or vulnerabilities;
- Manipulation of opinions and preferences using erroneous information;
- Creating psychological dependencies;
- Steering and dissemination of internationally banned arms; and
- Generating "deep fakes".

It might be necessary to assess whether there is a risk of AI systems emerging which could take over control of humankind, in which case a ban and further precautionary measures would be needed. As stated above, the Regulatory Institute has no expertise with regard to the question of whether such a risk exists, but some experts claim that the precautionary principle calls for a ban and further precautionary measures.

b. Criteria or requirements to identify AI applications or technologies that should be banned, and in which contexts?

We assume that it is reasonable to use the criteria established for risk-management purposes and for classifying AI products into risk categories.

In general, AI systems should be banned if they jeopardise democracy and state political systems, the environment, human life and health, or basic human rights. Naturally this should be accompanied by a case-specific risk assessment to evaluate the following: probability of harm, gravity of potential harm, (ir)reversibility of consequences, potential risk mitigation, resources to be utilised to reverse consequences etc.

Regarding this point 10, you may also be interested in our comments on the respective Article 5 of the EU AI Act draft, which are attached.

11. What initiatives or government action can increase public trust in AI deployment to encourage more people to use AI?

Here are some ways to encourage AI usage:
- providing sound, consistent and transparent regulation and governance in the field;
- providing open access (online included) to the information about exercising that
regulation and governance, about approved AI systems (e.g. public registers of
approved AI systems);
- encouraging (and in some cases mandating) financial insurance in the field (first of
all, financial insurance of liability for damage inflicted by AI);
- mandating transparency of AI systems.

Implications and infrastructure
12. How would banning high-risk activities (like social scoring or facial recognition technology in certain circumstances) impact Australia’s tech sector and our trade and exports with other countries?

At first glance, any ban on certain economic activities depresses economic activity. But reasonable bans actually pay off. Australia, like any other jurisdiction, needs such bans to secure the rule of law, law and order, human rights and freedoms, public order, its democratic political system and national security. Moreover, these bans are to be one of the pillars of globalised AI regulation (the emergence of which is inevitable), and following general bans is crucial for being integrated into that globalised regulation.

13. What changes (if any) to Australian conformity infrastructure might be required to support assurance processes to mitigate against potential AI risks?

We are not familiar enough with Australia's conformity infrastructure to make specific recommendations here. However, our Handbook "How to regulate?" deals with conformity in advance (Chapter 10) and conformity after the fact, together with enforcement (Chapter 11). These two chapters invite the regulator to consider important elements of good conformity mechanisms, be it in advance, after the fact, or through the enforcement pathway.

Risk-based approaches
14. Do you support a risk-based approach for addressing potential AI risks? If not, is there a better approach?

We support a risk-based approach. We believe, though, that AI systems which jeopardise the most basic values, rights and freedoms should be banned regardless of quantified risk assessment. So a combination of a risk-based approach and a field-based (or goal-based) approach is preferable.

15. What do you see as the main benefits or limitations of a risk-based approach?
How can any limitations be overcome?

The main benefit of a risk-based approach is its proportionality, fairness and reasonableness. The main limitation is that exercising a risk-based approach could be biased (intentionally or unintentionally), thus failing to provide adequate protection.

This limitation can be overcome by using other approaches alongside the risk-based one.

16. Is a risk-based approach better suited to some sectors, AI applications or organisations than others based on organisation size, AI maturity and resources?

We believe a risk-based approach should be the basic approach, accompanied by other approaches in an assisting role only. We do not view organisation size, AI maturity or resources as sound indicative criteria. Furthermore, we see the stage of AI "maturity" as a potential weak spot, as specified above in point 9.

17. What elements should be in a risk-based approach for addressing potential AI risks? Do you support the elements presented in Attachment C?

We support the elements of the draft risk-based approach presented at Attachment C and provide the following specific comments:

a. Impact assessments: Our Model Law requires an impact assessment according to specific considerations under each Risk Class. We also provide guidance in the form of ethical rules to assist the impact assessment. For example, Section 4 Ethical Rules provides that:
AI systems shall be developed, operated and used in such a way that the following
ethical principles and rules are respected to the extent possible:
● Where several lives stand against each other, the solution saving the maximum
number of lives shall be sought for;
● The lives of all persons have the same value, in particular regardless of origin
and wealth or any of the criteria listed in the definition of “discrimination”;
● The different life expectancy of persons may / may not be taken into account /
may only be taken into account where one life stands against another life;
Section 6 concerns how to approach conflicts between the above principles.

b. Notices: We agree that users should be informed where automation or AI is used in ways that materially affect them. Section 44 of the Model Law concerns a right to refuse processing by AI systems in specific areas; we recommend a listing approach. The notices outlined in the Paper are narrow and could be broadened to cover other obligations to inform, which would facilitate a system of mutual control. See the Model Law's Section 23:
Developers shall:
● Inform operators, also in their commercial contracts, of their respective
obligations and the conditions set out in this law;
● Inform operators, also in their commercial contracts, of ethical problematic
aspects mentioned in this law, namely by referring to their own ethics code and
respective reports;
● Keep records of their commercial contacts with operators and inform authorities
upon their request; and
● Inform the supervising authority of infringements they become aware of, regardless of whether these are made by competitors, operators, users or conformity assessment bodies.

c. Human in the loop/oversight assessments: Under Section 8 of the Model Law we recommend that AI systems in Risk Classes 2 and 3 be designed, manufactured and operated in a way that ensures human control of ethical principles as well as of the parameters and mechanisms of decision-making. This regulatory obligation, and which risk classes it would apply to, will of course depend on a policy decision of the legislators, but a balance is required: societies fear decisions made by AI systems, yet full human control takes away the advantage of AI.

d. Explanations: Explainability is an important tool for verifying compliance with obligations.
The transparency obligation under Section 11 of the Model Law provides:
AI systems shall be developed, operated and used in such a way that:
● Decision-making can be probed, understood and reviewed by authorities,
supervisory bodies, common interest third parties, operators, users and their
clients;
● Decisions are explainable [both in technical and non-technical terms], which
implies in particular that the processes that extract model parameters from
training data and generate labels from testing data can be described and
motivated;
● Inputs and outputs can be verified;
● Records of design processes, decision-making and other events with external
effects or system relevant events are established and kept;
● The persons steering the processes, decision-making or other operations can be
identified, together with the decisions they have taken during installation or
operation of the AI system;
● Training, validation and testing datasets are accessible; and
● IT interfaces for full remote authority control (e.g. application programming
interfaces) are available and can be operated with commonly available OR
freely available software.
Section 18 requires developers and operators to establish, keep up to date and keep accessible technical documentation which shall, inter alia, include "…an explanation of how the ethical principles and other rules set out in Sections 4 and 6 to 14 have been fine-tuned and applied".

e. Training: Training is important for compliance. Section 19 of our Model Law requires that:
Developers, operators and users with clients shall train their staff (both employees
and freelancers) with regard to this law and supplementing decrees, ethics in
general and their own ethical code in particular. They shall raise awareness of risks
and impacts of the AI systems in question. They shall support their staffs' and freelancers' adherence to professional organisations aiming at the identification and tackling of issues of professional ethics and AI system ethics.

f. Monitoring and documentation: We recommend both a formal monitoring system by the supervising authority, particularly for the higher risk classes, and a system of mutual control (Section 23 of the Model Law). The supervising authority could be assisted for the lower risk classes by a third-party conformity assessment body (Section 23 of the Model Law).

18. How can an AI risk-based approach be incorporated into existing assessment frameworks (like privacy) or risk management processes to streamline and reduce potential duplication?

No comment.

19. How might a risk-based approach apply to general purpose AI systems, such as large language models (LLMs) or multimodal foundation models (MFMs)?
For general purpose AI systems (e.g. LLMs and MFMs) we do not view a risk-based approach as problematic. Given that a potential use case may fall under the highest category of risk, the regulatory assessment should therefore follow the obligations of the highest risk category.

20. Should a risk-based approach for responsible AI be a voluntary or self-regulation tool or be mandated through regulation? And should it apply to:
a. public or private organisations or both?
b. developers or deployers or both?

A risk-based approach is a core element of AI regulation. Thus it should be mandated through regulation. It should be mandatory for both public and private organisations, and for developers and deployers alike, as such distinctions are not essential when it comes to mitigating AI-related risks.

Final remarks:

There are important aspects of effective regulation not covered by any of the questions above, such as:
- AI platforms enabling the independent development of AI systems
(“meta-AI-systems”): these platforms risk overrunning all legal boundaries as the
dissemination of AI technology might become uncontrollable, fall into the hands of
rogue states or terrorists etc.; we do not know whether such platforms already exist,
but we expect them to emerge as we learnt that there are already platforms for the
creation of virtual worlds;
- Establishment of basic ethical rules to be followed by AI systems, eg. on trade-offs
between different values; see Sections 4 and 6 of our Model Law on Artificial
Intelligence;
- Data rights; see Sections 40 to 43 of our Model Law on Artificial Intelligence;
- Accident prevention obligations; see Section 5 of our Model Law on Artificial
Intelligence;
- Risk management with precise quantified values; see Section 10 of our Model Law
on Artificial Intelligence;
- Obligations on technical documentation and instructions for use; see Section 45 of
our Model Law on Artificial Intelligence;
- Prohibition of uncontrolled non-proliferation of AI systems; see Section 14 of our
Model Law on Artificial Intelligence;
- Liability and its insurance; see Sections 4 and 6 of our Model Law on Artificial
Intelligence.
- Comprehensive empowerments for authorities to act against those who infringe the
law and those contributing to infringements or steering infringements from behind,
parent and sister companies, etc.; see Section 48 of our Model Law on Artificial
Intelligence and Sections 41 and 45 of our Model Law on cross-border internet
activities and virtual worlds.
- Comprehensive empowerments for acting abroad / internationally, also with the help
of other states; see Section 44 of our Model Law on cross-border internet activities
and virtual worlds.
- Comprehensive empowerments for acting domestically to assist other states when
other states wish to enforce their AI law against actors on Australian territory;
see Section 56 of our Model Law on Artificial Intelligence and Section 44 of our
Model Law on cross-border internet activities and virtual worlds.
- Establishment of systems in which economic actors control each other, refuse to
cooperate with infringing actors, are obliged to report to authorities on noted
infringements etc; see various Sections of both model laws.
- In the case of extremely risky AI systems: an obligation on AI system providers to ensure compliance of their trade partners and clients via private law contracts obliging them to respect the AI regulation (law enforcement via a chain of private law contractual obligations down to the level of the user); possibly to be complemented by an authority licensing mechanism for contracts and business conditions in line with Section 6.f of our Model Law on cross-border internet activities and virtual worlds.
- Comprehensive lists of obligations for all types of natural and legal persons interacting with AI systems, in the style of the lists of obligations in Chapter 2 of our Model Law on cross-border internet activities and virtual worlds.

See, for further complementary aspects, our Model Law on Artificial Intelligence, our Model Law on cross-border internet activities and virtual worlds, and this innovative product legislation proposal drafted by the European Commission, which is partly based on our Handbook "How to regulate?". If you would like a precise analysis of "what is missing" in the "Safe and responsible AI in Australia" Discussion paper, please come back to us. We would be in a position to deliver such an analysis in about 10 working days.



Upload 2

Automated Transcription

Regulatory Institute general comments about the AI Act:

We understand that the AI Act's definition of AI (Article 3) and classification of high-risk AI systems (Article 6) are settled at this point, but the Institute believes both (the AI definition and the high-risk classification) will be cumbersome to administer and will cause some AI systems not to be regulated. Noting that the specific objectives of the AI Act include:
● ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;
● ensure legal certainty to facilitate investment and innovation in AI;
● enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems; and
● facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.

We urge additional consideration of the approach taken: a narrow AI definition combined with a technology-driven listing approach. We believe this approach will need frequent and cumbersome updating, while still creating loopholes, because an AI system may be placed on the market unregulated on account of not being a product subject to third-party conformity assessment and being a technology not yet included in the Annex III high-risk list. Furthermore, the decision to regulate AI shows there was little confidence in the previous system of "self-regulation", so how confident can we be that providers will self-assess their new AI system as high-risk?

Research is outside the scope of the Act because it is not placed on the market and because of a desire not to stifle research (paragraph 16 of the opening memorandum, page 21, refers). However, research can take place in private and semi-private institutions, which could have less probity than a public institution or a private institution subject to the conditions of public funding. We recommend keeping some boundaries and not exempting research from the scope of this law.
Please refer to Section 36 of the Model Law to include AI research in the scope of the Act.

We have noticed that civil liability relations appear to be outside the AI Act's scope. We suggest it would be appropriate to set out at least a general framework and principles on this matter, as our Model Law suggests (Articles 45-46 refer).
Regulatory Institute specific comments, including suggestions for amending Articles in the Act:
(Note that, in the original table, underlined words in the middle column, "Proposals", represent suggested additions and struck-through words represent suggested deletions; this formatting is not preserved in the automated transcription, so the proposals below describe the suggested changes in plain text.)

Each entry below sets out the respective provision of the AI Act, our proposal, and our comments (including Model Law references).

AI Act provision: Empowerments of national competent authorities are found in various Articles throughout the Act, depending on their role(s):
● national supervisory authority (Article 59 paragraph 2);
● notifying authority (Article 30 Notifying Authorities); or
● market surveillance authority (Article 3 paragraph 26; empowerments outlined in Regulation (EU) 2019/1020 and Article 63).

Proposal: The extent to which national competent authorities operate and align resources according to their enforcement obligations and legislative empowerments highlights the positive benefits of a thorough list of empowerments, which clarifies what the authority can and should be doing. In the proposed Act, the various authorities' empowerments are scattered throughout, usually in the respective procedures to be followed in particular situations, or contained in other regulations (usually such regulations are incomplete or lack specificity). For clarity we recommend a summarised list of empowerments in the chapter dealing with the respective national competent authorities. For example, Art. 59 could provide a list of minimum empowerments a national competent authority should have to fulfil its obligations under the Act, including:
● Communicating warnings and recommendations to the population;
● Ordering infringing developers, operators and users with clients, and their media and internet service providers, to communicate warnings and recommendations;
● Blocking or removing content from internet websites offering AI systems or access thereto;
● Interrupting or fully controlling the telephone, media and internet services of continuously infringing developers, operators or users with clients, or ordering the respective service providers to do so;
● Requesting developers, operators and users with clients to take certain steps in order to stop an infringement or to reduce the likelihood of further infringements;
● Recovering investigation and enforcement costs from infringing developers, operators or users with clients;
● Enforcing financial obligations and financial sanctions or penalties via confiscation of AI systems, rights, money or other items in the possession of the infringing person;
● Obliging contractual partners of infringing developers, operators and users with clients to stop, limit or modify their cooperation;
● Obliging developers, operators and users with clients to display information on the conformity assessment of regulated products or services on their website;
● Requiring operators to inform users of infringements affecting them, and requiring users with clients to inform their clients of infringements affecting them;
● Inspecting, without notice, offices, factories, warehouses, wholesaling establishments, retailing establishments, laboratories, research institutions and other premises or vehicles in which AI systems are produced or kept;
● Taking samples or copies of AI systems, or purchasing them, openly or covertly;
● Reverse engineering AI systems; and
● Supervising the AI system during the course of an investigation of an infringement.
The general empowerment of national competent authorities to request information and documentation could be strengthened if it were made clear that such reasoned requests also cover information and documentation from contractual partners or on contracts. This clarity reflects the complexity of AI development chains, so that providers cannot hide behind contractual confidentiality agreements or claim that they do not have the information because it is the purview of another contractor, in another country, not subject to the EU Member State's jurisdiction.

Comment: See Section 48 of the Model Law and the Regulatory Institute's Handbook "How to regulate?", Chapters 10 (Section 10.5 refers) and 12 (Section 12.5 refers).

AI Act provision (Art. 3 point (4)): 'user' means any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity;

Proposal:
Art. 3 point (4): 'user' means any natural or legal person, public authority, agency or other body using an AI system under its authority.
Art. 3 point (4-1): 'professional user' means a user using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity;
Art. 3 point (4-2): 'non-professional user' means a user using an AI system under its authority, where the AI system is used in the course of a personal non-professional activity.

Comment: The current version of the Regulation provides substantial protection to end-users of AI systems, while also assigning them a series of duties. But the definition in question excludes natural persons who use AI systems in the course of a personal non-professional activity from the scope of the Regulation. Thus such persons appear to lack legal protection. The idea is to bring such persons ('non-professional users') within the scope of the Regulation. It could also be reasonable to distinguish between 'professional user' and 'non-professional user', especially in the context of defining their legal duties.

AI Act provision (Art. 3 point (35)): 'biometric categorisation system' means an AI system for the purpose of assigning natural persons to specific categories, such as sex, age, hair colour, eye colour, tattoos, ethnic origin or sexual or political orientation, on the basis of their biometric data;

Proposal: the same definition with the reference to political orientation struck through.

Comment: It is hardly likely that biometric data could reveal the political orientation of a person. Moreover, such an assumption could be found insulting.

AI Act provision (Art. 3 point (44)): 'serious incident' means any incident that directly or indirectly leads, might have led or might lead to any of the following:
(a) the death of a person or serious damage to a person's health, to property or the environment,
(b) a serious and irreversible disruption of the management and operation of critical infrastructure.

Proposal (Art. 3 point (44)): 'serious incident' means any incident that directly or indirectly leads, might have led or might lead to any of the following:
(a) the death of a person or serious damage to a person's health, to property or the environment,
(b) a breach of fundamental rights defined by the Charter of Fundamental Rights of the European Union;
(c) a systematic, mass or serious breach of other rights;
(d) a serious and irreversible disruption of the management and operation of critical infrastructure.

Comment: 'Serious incident' is the criterion that triggers the procedures for reporting serious incidents. AI systems, according to the fields in which they are being and/or will be used, are very likely not only to do damage to health, property or the environment, but also to result in breaches of the rights of other natural or legal persons. Thus such consequences, if serious enough (concerning fundamental rights, mass in scale, etc.), should be considered 'serious' and be reported in the way the Regulation demands.

AI Act provision (Art. 5 paragraph 1 points (a), (b)):
1. The following artificial intelligence practices shall be prohibited:
(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person's consciousness in order to materially distort a person's behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;
(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;

Proposal: in both points (a) and (b), delete the words "causes or is likely to cause that person or another person physical or psychological harm" and add the words "disregards and/or overcomes human will", so that each prohibition applies to distortion of a person's behaviour "in a manner that disregards and/or overcomes human will".

Comment: Free will is a fundamental human right which has to be secured through outlawing any violent influence. Thus violation of free will, and not physical or psychological harm, should be the criterion for banning AI systems that distort a person's behaviour. AI systems that use subliminal techniques beyond a person's consciousness in order to materially distort a person's behaviour should be prohibited. Requiring a threshold of "causes or is likely to cause that person or another person physical or psychological harm" is not appropriate, as it disregards free will.

AI Act provision (Art. 5 paragraph 1 point (d)):
1. The following artificial intelligence practices shall be prohibited:
[...]
(d) the use of 'real-time' remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives:
(i) the targeted search for specific potential victims of crime, including missing children;
(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack;
(iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA[1] and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, as determined by the law of that Member State.

Proposal: keep point (d) and add a new point (c):
(c) the use of 'real-time' remote biometric identification systems in publicly accessible spaces by anyone except a law enforcement authority or other public authorities maintaining public order pursuant to point (d). This rule shall not apply to cases when 'real-time' remote biometric identification systems are used exceptionally to ensure that access to the publicly accessible space in question is not given to persons who cannot access it according to law.

Comment: Law enforcement bodies could bypass the actual prohibition by involving private proxy-persons to maintain 'real-time' remote biometric identification, especially since the results of such identification are far from always used as procedural evidence, but rather as operative information to locate and detain a person. So it is reasonable to ban the use of 'real-time' remote biometric identification systems in publicly accessible spaces by anyone except a law enforcement authority or other public authorities maintaining public order. An exception could be made for cases when such identification is used to ensure that access to the publicly accessible space in question is not given to persons who cannot access it according to law.

AI Act provision: Title III High-Risk AI Systems, Chapter 1 Classification of AI Systems as High-Risk, Article 7 Paragraph 2.

Proposal: The criteria [listed at (a)-(h)] for assessing whether an AI system poses a risk of harm to health and safety or a risk of adverse impact on fundamental rights are somewhat broad. It could be useful to add criteria around how that risk is quantified according to ethical rules. In our Model Law we list ethical rules to be considered when deciding which Risk Class applies to an AI system (see Section 4 of the Model Law). We also outline guidance on how to approach conflicts between ethical principles.

Comment: The Risk Classes outlined in the Model Law are generic, not technology-linked and thus open to future developments. They do not rely on permanent updating. For example, our Risk Class 3 is much broader than the Act's approach of referring to certain software technologies and covering "safety relevant software components for products subject to a third party conformity assessment procedure". The latter is likely to create loopholes because very few jurisdictions' legislation covers all technologies requiring a "safety component" to complete a third party conformity assessment procedure. A few examples of poorly (or not) regulated products and services across the globe, despite their high risk potential, include: satellites, geo-engineering tools and software, navigation tools and software, (water-)drones and health-relevant software.

AI Act provision (Article 7 Amendments to Annex III):
2. When assessing for the purposes of paragraph 1 whether an AI system poses a risk of harm to the health and safety or a risk of adverse impact on fundamental rights that is equivalent to or greater than the risk of harm posed by the high-risk AI systems already referred to in Annex III, the Commission shall take into account the following criteria:
[(a)–(h)]

Proposal: add a further criterion:
(g) the extent to which establishing reasonable human control could reduce potential harm or impact while not reducing the efficiency of an AI system and other benefits from using it.

Comment: We suppose it is reasonable to consider whether developing sound human control could substitute for qualifying an AI system as high-risk.
AI Act provision: Chapter 2 Requirements for High-Risk AI Systems, Article 9 Risk Management.

Proposal: As above, the risk management system is silent about how to treat conflicts between ethical principles, presumably because it is up to the provider to make an assessment. Considering the nature of high-risk AI systems, it is best to give some guidance about how such conflicts should be resolved. See Section 6 of the Model Law, which invites the legislator to reflect on how AI systems should be construed and operated. For example, a useful addition to Article 9 could be either Variant 3 or Variant 4 from Section 6 of the Model Law.

Comment (Variants 3 and 4 of Section 6 of the Model Law):
Var. 3
... the following principles shall apply:
● Where two or more differing interests in question can be fairly well protected, either by limiting the harm, limiting the probability of harm or a combination thereof, this fairly good protection shall be sought for. If thereafter there is still a margin of discretion, the interest(s) with higher value shall be protected as a priority. Where two or more interests have the same value, the degree of protection of the interests shall be optimised so that, if the values were on the same scale, the overall degree of protection would be highest.
● Where the interests in question cannot be fairly well protected, the interest(s) with higher value shall be protected as a priority, unless the probability of harm is negligible. Where two or more interests have the same value, the degree of protection of the interests shall be optimised so that, if the values were on the same scale, the overall degree of protection would be highest.

Var. 4
... the following shall apply:
The value of two or more interests shall be multiplied with the probability of harm thereto. A solution shall be sought where the sum of the products of the two or more interests multiplied by their respective probabilities of harm is minimal.
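Var. 4 is, in effect, an expected-harm minimisation rule: multiply the value of each interest by its probability of harm and prefer the solution with the smallest sum of these products. A minimal illustrative sketch of that calculation, using hypothetical interests, values and probabilities of our own (none of them drawn from the Model Law), might look as follows:

```python
# Illustrative sketch of the Var. 4 rule: choose the option that minimises
# the sum of (value of an interest) x (probability of harm to that interest).
# All interests, values and probabilities below are hypothetical examples.

# Hypothetical relative values assigned to the interests at stake.
interest_values = {"life_and_health": 1000, "privacy": 50, "property": 10}

# Candidate design/operation options -> probability of harm to each interest.
candidate_options = {
    "option_a": {"life_and_health": 0.001, "privacy": 0.20, "property": 0.05},
    "option_b": {"life_and_health": 0.0005, "privacy": 0.40, "property": 0.02},
}

def expected_harm(harm_probabilities):
    """Sum of products: value of each interest multiplied by its probability of harm."""
    return sum(interest_values[i] * p for i, p in harm_probabilities.items())

# Var. 4: seek the solution where the sum of products is minimal.
best_option = min(candidate_options, key=lambda name: expected_harm(candidate_options[name]))

for name, probabilities in candidate_options.items():
    print(name, "expected harm:", round(expected_harm(probabilities), 3))
print("preferred option under Var. 4:", best_option)
```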

AI Act provision (Article 12 Record-keeping):
[1.–3.]
4. For high-risk AI systems referred to in paragraph 1, point (a) of Annex III, the logging capabilities shall provide, at a minimum:
(a) recording of the period of each use of the system (start date and time and end date and time of each use);
(b) the reference database against which input data has been checked by the system;
(c) the input data for which the search has led to a match;
(d) the identification of the natural persons involved in the verification of the results, as referred to in Article 14(5).

Proposal: add new paragraphs 5 to 8:
5. For high-risk self-learning AI systems the logging of self-learning shall be maintained. The logging shall provide, at a minimum:
(a) the input data used for self-learning;
(b) the algorithms used for interpreting the input data;
(c) the results of self-learning.
6. Where a decision and/or proposal of a decision is the outcome of an AI system, the logging shall cover information comprehensively sufficient for further human manual review of the decision/proposal with no need to refer to the AI system itself. The logging shall provide, at a minimum:
(a) the input data;
(b) the reference database, if present;
(c) the algorithms that could have been used;
(d) the algorithms that were actually used;
(e) the output data (decision and/or proposal);
(f) a comprehensive description of how the input data resulted in the output data.
7. For all high-risk AI systems, including those mentioned in paragraphs 4–6 above, the logging shall provide, at a minimum:
(a) log-in information (user, date, time, authentication type);
(b) the input data;
(c) the output data.
8. The Commission is empowered to adopt delegated acts in accordance with Article 73 to define further minimum logging requirements for AI systems or certain types of AI systems.
[Respective amendments to Article 73 should be made to set the framework of this empowerment, as Article 73 does for the other Commission empowerments present in the Regulation.]

Comment: We have noticed that the Regulation provides no minimum logging requirements for AI systems other than "AI systems intended to be used for the 'real-time' and 'post' remote biometric identification of natural persons". We suppose there should be general minimum requirements and/or requirements for other AI systems, or at least an empowerment for the Commission to adopt delegated acts on this matter. The proposal above provides some relevant additions.
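As an illustration only, a per-decision log record covering the minimum fields of proposed paragraphs 6 and 7 could be sketched roughly as follows; the field names, types and example values are our own assumptions and are not prescribed by the proposal or by the Act:

```python
# Hypothetical sketch of a per-decision log record covering the minimum
# fields suggested in proposed Article 12 paragraphs 6 and 7.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionLogRecord:
    # Paragraph 7: log-in information (user, date, time, authentication type).
    user_id: str
    timestamp: datetime
    authentication_type: str
    # Paragraph 6 (a)-(b): the input data and the reference database, if present.
    input_data: dict
    reference_database: Optional[str]
    # Paragraph 6 (c)-(d): algorithms that could have been used / actually used.
    algorithms_available: list = field(default_factory=list)
    algorithms_used: list = field(default_factory=list)
    # Paragraph 6 (e)-(f): the output and how the input led to the output.
    output: dict = field(default_factory=dict)
    explanation: str = ""

# Example record for a hypothetical decision-support system.
record = DecisionLogRecord(
    user_id="caseworker-17",
    timestamp=datetime.now(timezone.utc),
    authentication_type="smartcard",
    input_data={"application_id": "A-123", "declared_income": 42000},
    reference_database="benefits-register-v3",
    algorithms_available=["rule_engine_v2", "scoring_model_v5"],
    algorithms_used=["scoring_model_v5"],
    output={"decision": "manual review required", "score": 0.63},
    explanation="Score 0.63 exceeded the manual-review threshold of 0.6.",
)
print(record.user_id, record.output["decision"])
```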

AI Act provision (Article 13 Transparency and provision of information to users):
1. High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable users to interpret the system's output and use it appropriately. An appropriate type and degree of transparency shall be ensured, with a view to achieving compliance with the relevant obligations of the user and of the provider set out in Chapter 3 of this Title.

Proposal: add at the end of paragraph 1: "The user shall be provided access to their respective records, specified in Article 12, except for information that is covered by copyright or other intellectual property rights or is otherwise protected by law."

Comment: We suppose that the user should have access to the logs to the extent that the rights of others remain secure.

AI Act provision (Article 13 Transparency and provision of information to users):
[1.]
2. High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to users.
3. The information referred to in paragraph 2 shall specify:
[(a)–(e)]

Proposal: add a further point to paragraph 3:
(f) known accuracy, robustness and cybersecurity issues, errors, faults or inconsistencies, and other technical issues, "bugs" etc., followed by recommended ways to avoid their effect before they are fixed, except for information that meets both of the following criteria:
(1) the information is not generally known; and
(2) the information can be used to exploit the system's vulnerabilities.

Comment: We believe that providing the users of AI systems with information about known bugs and temporary measures for countering them could enhance the AI systems' stability and security.

AI Act provision (Article 15 Accuracy, robustness and cybersecurity):
[1.–2.]
3. High-risk AI systems shall be resilient as regards errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems.
The robustness of high-risk AI systems may be achieved through technical redundancy solutions, which may include backup or fail-safe plans.
High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way to ensure that possibly biased outputs due to outputs used as an input for future operations ('feedback loops') are duly addressed with appropriate mitigation measures.

Proposal: add at the end of paragraph 3: "These AI systems shall also have mechanisms in the log that permit the review, acceptance and/or discarding of the self-learning results."

Comment: Mechanisms in the log that permit the review, acceptance and/or discarding of the self-learning results will improve the AI systems' stability and security and maintain due human control over their self-learning.

Article 20 Automatically generated logs

Original text:

1. Providers of high-risk AI systems shall keep the logs automatically generated by their high-risk AI systems, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law. The logs shall be kept for a period that is appropriate in the light of the intended purpose of the high-risk AI system and applicable legal obligations under Union or national law.

[2.]

Proposed addition to paragraph 1:

This period shall be not less than 3 years and not less than the respective prescriptive period, counting from the moment the AI system output had or should have had an effect.

Proposed new paragraph 3:

3. Providers shall prolong log keeping in situations where claims arise during the prescriptive period requiring those logs to be accessed.

Comment:

The proposals in question are aimed at ensuring the possibility to use logs in the course of judicial and non-judicial legal protection.
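To illustrate the proposed retention rule, the hypothetical Python helper below computes the earliest permissible deletion date as the later of three years and the applicable prescriptive (limitation) period, counted from the date the AI system output had or should have had an effect. The function name and the six-year limitation period in the example are our own assumptions; the actual period depends on the applicable national law.

from datetime import date, timedelta

# Simplification: a "year" is treated as 365 days for the purpose of this sketch.
THREE_YEARS = timedelta(days=3 * 365)


def minimum_retention_until(effect_date: date, prescriptive_period: timedelta) -> date:
    """Earliest date on which the logs could be deleted under the proposed rule."""
    return effect_date + max(THREE_YEARS, prescriptive_period)


# Example: an output took effect on 1 March 2024 and the relevant limitation
# period is six years, so the logs would have to be kept until early 2030.
print(minimum_retention_until(date(2024, 3, 1), timedelta(days=6 * 365)))
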
Article 29 Obligations of users of high-risk AI systems

Original text:

[1.–4.]

5. Users of high-risk AI systems shall keep the logs automatically generated by that high-risk AI system, to the extent such logs are under their control. The logs shall be kept for a period that is appropriate in the light of the intended purpose of the high-risk AI system and applicable legal obligations under Union or national law.

Users that are credit institutions regulated by Directive 2013/36/EU shall maintain the logs as part of the documentation concerning internal governance arrangements, processes and mechanisms pursuant to Article 74 of that Directive.

Proposed additions to paragraph 5:

This period shall be not less than 3 years and not less than the respective prescriptive period, counting from the moment the AI system output had or should have had an effect.

Users shall prolong log keeping in situations where claims arise during the prescriptive period requiring those logs to be accessed.

Comment:

The proposals in question are aimed at ensuring the possibility to use logs in the course of judicial and non-judicial legal protection.

Chapter 4 Framework for notifying authorities and notified bodies, Articles 30 and 33

Proposed text:

Article 33

1. Notified bodies shall verify the conformity of high-risk AI systems in accordance with the conformity assessment procedures referred to in Article 43, as well as the structural ability of the provider to fulfil these conditions and obligations, on the basis of clear, predetermined pass/fail criteria.

[2.–12.]

13. Notified bodies shall inform other notified bodies and the notifying authority of any withdrawn certificates and issues or questions that might be relevant for other providers.

14. Notified bodies shall seek to align their practices with other notified bodies. Notified bodies shall inform the notifying authority of doubtful practices of other notified bodies.

15. Notified bodies shall publish a register of certificates issued.

Comment:

A missed opportunity here to strengthen the obligations of notifying authorities and notified bodies. See the Model Law, Sections 31 and 32.

Article 33 Notified bodies

Original text:

[1.–3.]

4. Notified bodies shall be independent of the provider of a high-risk AI system in relation to which it performs conformity assessment activities. Notified bodies shall also be independent of any other operator having an economic interest in the high-risk AI system that is assessed, as well as of any competitors of the provider.

Proposed addition:

Notified bodies shall not carry out commercial activity as a provider, authorised representative, importer, distributor or operator of high-risk AI systems, except for AI systems whose intended purpose does not go beyond maintaining the activity of the notified body as such.

Comment:

The proposal in question is aimed at ensuring extra independence of notified bodies from the respective market stakeholders, and at ensuring notified bodies' impartiality.

Article 45 Appeal against decisions of notified bodies

Original text:

Member States shall ensure that an appeal procedure against decisions of the notified bodies is available to parties having a legitimate interest in that decision.

Proposed addition:

A judicial appeal procedure should be available in any case.

Comment:

The proposal in question is aimed at ensuring access to judicial forms of legal protection.

Article 48 EU declaration of conformity

Original text:

1. The provider shall draw up a written EU declaration of conformity for each AI system and keep it at the disposal of the national competent authorities for 10 years after the AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be given to the relevant national competent authorities upon request.

Proposed addition:

The provider shall draw up a written EU declaration of conformity for each AI system and keep it at the disposal of the national competent authorities for a period of more than 10 years after the AI system has been placed on the market or put into service where either (1) the respective AI system is being supported, or (2) the respective AI system continues to be on the primary market, or (3) the provider receives fees for the use of the respective AI system.

Comment:

We believe that an AI system's marketing lifecycle could exceed 10 years. The 10-year period in question must therefore not bottleneck the regulatory measures ensuring AI systems' safety, so we propose to extend the period of the duty in question to the respective AI system's actual lifecycle.

Article 50 Document retention

Original text:

The provider shall, for a period ending 10 years after the AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities:

(a) the technical documentation referred to in Article 11;

(b) the documentation concerning the quality management system referred to in Article 17;

(c) the documentation concerning the changes approved by notified bodies where applicable;

(d) the decisions and other documents issued by the notified bodies where applicable;

(e) the EU declaration of conformity referred to in Article 48.

Proposed addition:

The provider shall continue to keep the documents at (a)–(e) at the disposal of the national competent authorities for a period of more than 10 years after the AI system has been placed on the market or put into service where either (1) the respective AI system is being supported, or (2) the respective AI system continues to be on the primary market, or (3) the provider receives fees for the use of the respective AI system.

Comment:

We believe that an AI system's marketing lifecycle could exceed 10 years. The 10-year period in question must therefore not bottleneck the regulatory measures ensuring AI systems' safety, so we propose to extend the period of the duty in question to the respective AI system's actual lifecycle.
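By way of illustration only, the hypothetical Python check below restates the extended retention condition we propose for Articles 48 and 50: the documentation must be kept for at least 10 years and, beyond that, for as long as the AI system is supported, remains on the primary market, or continues to generate fees for the provider. The function and parameter names are our own and are not drawn from the legal text.

def must_keep_documentation(years_since_placing: float,
                            is_supported: bool,
                            on_primary_market: bool,
                            provider_receives_fees: bool) -> bool:
    """Hypothetical restatement of the proposed retention duty for the EU declaration
    of conformity (Article 48) and the Article 50 documents: 10 years in any case,
    and longer while any of the three lifecycle conditions still holds."""
    within_base_period = years_since_placing <= 10
    lifecycle_ongoing = is_supported or on_primary_market or provider_receives_fees
    return within_base_period or lifecycle_ongoing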

Article 52 Transparency obligations for certain AI systems

Original text:

1. Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.

Proposed addition:

If an AI system is used to provide decisions or information or recommendations for decisions, this obligation shall apply regardless of whether the fact of interacting with an AI system is obvious from the circumstances and the context of use.

Comment:

We suppose people should have extra awareness of interacting with an AI system if the latter is designed to provide decisions, or information or recommendations for decisions.
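The hypothetical sketch below restates the proposed disclosure logic: the "obvious from the circumstances" carve-out would no longer excuse non-disclosure where the AI system provides decisions, or information or recommendations for decisions. The function and parameter names are our own, and the law-enforcement exception is simplified to a single flag.

def must_disclose_ai_interaction(obvious_from_context: bool,
                                 provides_decisions_or_recommendations: bool,
                                 law_enforcement_exception_applies: bool) -> bool:
    """Hypothetical restatement of the proposed transparency rule in Article 52(1)."""
    if law_enforcement_exception_applies:
        # Simplified stand-in for the carve-out for systems authorised by law to
        # detect, prevent, investigate and prosecute criminal offences.
        return False
    if provides_decisions_or_recommendations:
        # Proposed addition: disclosure is required even where the AI interaction
        # is obvious from the circumstances and the context of use.
        return True
    return not obvious_from_context
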
Article 71 Penalties

Original text:

[1.–5.]

6. When deciding on the amount of the administrative fine in each individual case, all relevant circumstances of the specific situation shall be taken into account and due regard shall be given to the following:

(a) the nature, gravity and duration of the infringement and of its consequences;

(b) whether administrative fines have been already applied by other market surveillance authorities to the same operator for the same infringement;

(c) the size and market share of the operator committing the infringement;

Proposed amendment to point (b):

(b) whether administrative fines have been already applied by other market surveillance authorities to the same operator (or to its predecessors, if the predecessors' share in the current company equals or exceeds 50%) for the same infringement;

Comment:

This specification is aimed at countering possible attempts of companies to mitigate their own responsibility by carrying out a reorganisation wherein a predecessor legal person is substituted by a successor legal person.
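As a purely illustrative restatement of the proposed clarification to point (b), the hypothetical function below treats a fine previously imposed on a predecessor as a fine imposed on the current operator whenever that predecessor's share in the current company is 50% or more; all names are invented for the example.

from typing import Mapping


def prior_fine_counts(fined_entity: str,
                      current_operator: str,
                      predecessor_shares: Mapping[str, float]) -> bool:
    """Hypothetical check: does a fine already applied to `fined_entity` count
    against `current_operator` under the proposed reading of Article 71(6)(b)?"""
    if fined_entity == current_operator:
        return True
    # Shares are expressed as fractions, e.g. 0.6 for a 60% share in the current company.
    return predecessor_shares.get(fined_entity, 0.0) >= 0.5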

ANNEX III

High-risk AI systems pursuant to Article 6(2) are the AI systems listed in any of the following areas:

Original text:

[1.–3.]

4. Employment, workers management and access to self-employment:

(a) AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests;

(b) AI intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships.

Proposed amendment to the heading of point 4:

4. Employment, workers management (including those in the sphere of public service, municipal service, justice and law enforcement bodies) and access to self-employment:

Comment:

We find it useful to specify that the rule in question covers employment and workers management in the field of public service and analogous fields.

ANNEX III

High-risk AI systems pursuant to Article 6(2) are the AI systems listed in any of the following areas:

Original text:

[1.–7.]

8. Administration of justice and democratic processes:

(a) AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts.

Proposed amendment:

8. Law, administration of justice and democratic processes:

(a) AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts;

(b) AI systems intended to automatically assign judges to cases;

(c) AI systems intended to automatically assign public officers to non-disputed cases;

(d) AI systems intended to make decisions and/or provide proposals of decisions which directly affect a person's legal status, including the emergence, modification or termination of legal rights and duties.

Comment:

We consider the amended spheres to be crucially important, so the relevant AI systems should be considered high-risk.

ANNEX III

High-risk AI systems pursuant to Article 6(2) are the AI systems listed in any of the following areas:

[1.–7.]

Proposed addition:

9. Public finances, taxes and customs, banking and transactions.

Comment:

We suppose these spheres, while not covered by Annex III, may be elaborated as well, or perhaps a reference to the respective regulation should be cited.


Make a general comment

See our answers in the uploaded documents