Czech EU Presidency proposes restricted classification of high-risk systems

A new partial compromise on the Artificial Intelligence (AI) Act, seen by EURACTIV on Friday (16 September), further elaborates the concept of an “additional layer” that would qualify an AI system as high-risk only if it has a significant impact on decision-making.

The AI Act is a landmark proposal to regulate artificial intelligence in the EU following a risk-based approach. The high-risk category is therefore a key element of the regulation, as it covers the applications with the greatest potential impact on people’s safety and on fundamental rights.

On Friday, the Czech Presidency of the Council of the EU circulated the new compromise, which aims to provide an answer to outstanding questions relating to the categorisation of high-risk systems and the resulting obligations for AI providers.

The text focuses on the first 30 articles of the proposal and also covers the definition of AI, the scope of the regulation and prohibited AI applications. The document will serve as the basis for a technical discussion at the meeting of the “Telecommunications and information society” group on 29 September.

Classification of high-risk systems

In July, the Czech EU Presidency proposed adding an extra layer to determine whether an AI system carries high risks, namely the condition that the system play a significant role in shaping the final decision.

The central idea is to increase legal certainty and prevent AI applications that are “purely incidental” to decision-making from falling within the scope. The Czech EU Presidency wants the European Commission to define the concept of “purely incidental” through an implementing act within one year of the regulation’s entry into force.

The principle that a system capable of making decisions without human control would be considered high-risk has been removed because “not all automated AI systems are necessarily high-risk, and because such a provision could be circumvented by putting a human in the middle”.

In addition, the text states that when the EU executive updates the list of high-risk applications, it must take into account the potential benefits that AI may have for individuals or society at large, rather than the potential harm alone.

The Czech EU Presidency did not modify the high-risk categories listed in Annex III, but reworded them significantly. In addition, the text now explicitly states that the conditions allowing the Commission to remove applications from the high-risk list are cumulative.

Requirements for high-risk systems

In the section on risk management, the Czech EU Presidency changed the wording to exclude the possibility of identifying risks related to high-risk systems through testing; this practice should only be used to verify or validate mitigation measures.

The changes also give more leeway to the competent national authority to assess what technical documentation is needed for SMEs supplying high-risk systems.

With respect to human oversight, the proposed regulation requires at least two people to oversee high-risk systems. The Czechs, however, propose an exception to this “four-eyes principle” — i.e. the verification of an operation by two people as a safety measure — for AI applications in the field of border control, where EU or national legislation allows it.

For financial institutions, the compromise foresees that the quality management system they must put in place for high-risk use cases can be integrated with the one already in place to comply with existing sectoral legislation, in order to avoid duplication.

Financial authorities would also have market enforcement powers under the AI Regulation, including carrying out ex-post surveillance activities that can be integrated into the existing enforcement mechanism of EU financial services legislation.


The Czech EU Presidency kept most of the changes it had made to the definition of AI, but removed the reference to the requirement for AI to follow “human-defined” objectives, considering it “non-essential”.

The text now specifies that the life cycle of an AI system ends if it is withdrawn by a market surveillance authority or if it undergoes a substantial modification, in which case it must be considered a new system.

The compromise also introduced a distinction between the user and the person who controls the system, who is not necessarily the same as the person affected by the AI.

The Czechs also supplemented the definition of machine learning by specifying that it is a system capable not only of learning but also of inferring data.

Furthermore, the previously introduced concept of the autonomy of an AI system has been described as “the degree to which such a system operates without external influence”.


The Czechs have introduced a more direct exclusion of research and development activities related to AI, “including also in relation to the exception for national security, defence and military purposes”, the explanatory part notes.

The substantive part of the text on general-purpose AI has been left for the next compromise.

Prohibited practices

The part on prohibited practices, a sensitive issue for the European Parliament, has not proved controversial among the member states, which have not requested major changes.

At the same time, the preamble to the text defines more precisely the concept of AI manipulation techniques as stimuli that go “beyond human perception or other subliminal techniques that subvert or impair a person’s autonomy […] for example in the case of direct neural interfaces or virtual reality”.
