French EU presidency wants alignment with new legislative framework

France has proposed several changes to the draft artificial intelligence (AI) law in order to better align it with the new legislative framework. The changes relate in particular to the designation of competent authorities and the database of high-risk AI systems.

The French presidency, which is leading the work in the Council of the EU, shared a new compromise text on Monday (April 25), which will be discussed with representatives of the other member states in the telecommunications working group on Thursday.

Notified bodies and competent authorities

Notified bodies will play a crucial role in enforcing the AI law, as they will be appointed by EU countries to assess AI systems' compliance with EU rules before they are placed on the market.

The new text makes explicit reference to the EU regulation establishing requirements for accreditation and market surveillance, and adds that these bodies will have to respect confidentiality obligations.

A new article has been introduced to define how notified bodies operate, in particular for the conformity assessment of high-risk systems. It includes provisions on how notified bodies must collaborate with the notifying authority, the national authority in charge of overseeing the whole conformity assessment process.

If the national authority has “sufficient reasons to consider” that a notified body is not fulfilling its obligations, it must take appropriate measures proportionate to the level of non-compliance, notably by restricting, suspending or withdrawing the notification of that body.

Becoming a notified body will involve an application procedure managed by the competent authority. National authorities will also be able to designate bodies outside this procedure, but in that case they will have to provide the Commission and the other member states with documents proving the body's competence and explaining how it will meet the relevant requirements.

Similarly, the Commission may challenge the competence of the notified body. A sentence has been added to give the EU executive the power to suspend, restrict or withdraw notification in certain cases, through secondary legislation, in line with similar provisions in the Medical Devices Regulation.

A significant part of the AI law will be implemented through harmonized standards, which will indicate how the regulation's general concepts, such as “fairness” and “safety”, apply in practice to artificial intelligence.

When a system meets these standards, it is presumed to comply with the regulations. Similarly, a new article specifies that this presumption of conformity also applies to notified bodies which follow harmonized standards.

Notified bodies across the bloc will have to coordinate their conformity assessment procedures within a dedicated working group, but only for high-risk AI systems. National authorities will ensure that this collaboration is effective.

It is also possible for conformity assessment bodies from countries outside the EU to be authorized by a national authority to carry out the same work as EU notified bodies, provided that they comply with the same requirements set out in the regulation.

Flexibility for countries

The text concerning the designation of competent national authorities has been modified in order to give EU countries more flexibility to organize these authorities according to their needs, provided that the principles of objectivity and impartiality are respected.

The French presidency proposes loosening the provisions on how EU countries must inform the Commission of this designation process.

The part concerning adequate resources for the competent authorities has also been made less restrictive. The frequency of resource status reports from national authorities has been reduced.

High-risk database and reporting

The article on the EU database of high-risk AI systems now also covers high-risk systems tested under real-world conditions as part of a regulatory sandbox, and provides that the “prospective provider” must enter them into the database.

In addition, a new paragraph states that information from the EU database on high-risk systems already on the market must be publicly available. For systems tested in a regulatory sandbox, the information will not be made public unless the prospective provider consents. The text specifies that the EU database will not contain personal data.

The article on the monitoring of systems already placed on the market has been clarified to apply only to high-risk systems. Providers have been given greater flexibility to collect, document and analyze the relevant data needed to assess compliance.

Malfunctions of high-risk systems have been excluded from the serious incident reporting requirements. At the same time, the text extends the possibility of applying those reporting obligations to financial institutions that might be brought into the scope of high-risk systems at a later stage.
