MEPs present new compromise on obligations for high-risk AI systems

Lawmakers leading discussions on the artificial intelligence (AI) law have presented a compromise on obligations for high-risk AI systems and a consolidation of earlier text, according to documents obtained by EURACTIV.

Co-rapporteurs Brando Benifei and Dragoș Tudorache circulated new compromise amendments last week, to be discussed at a technical meeting on Tuesday (August 30). The MEPs leading the dossier have so far focused on the less controversial parts of the text, in order to build momentum before tackling the most contested aspects of the proposal.

This strategy appears to have paid off, as the previous compromise amendments have been maintained with only minor changes. At the same time, the co-rapporteurs have begun working through the section on the obligations that providers and users of high-risk systems will face.

“We are having constructive discussions so far. This week, we will resume at full speed,” a European Parliament official told EURACTIV.

High-risk obligations

According to the new text, the quality management system that AI providers will have to put in place for high-risk systems can be integrated into existing systems created to comply with sectoral EU rules, such as the Medical Devices Regulation.

The articles on the obligation to draw up technical documentation and on conformity assessment have been deleted as they were considered too repetitive in relation to other articles.

MEPs say AI providers should keep records automatically generated by their AI systems for at least six months, unless EU or national law dictates otherwise. Longer periods may be warranted depending on industry standards or the purpose of the system.

If an AI provider believes that one of its high-risk systems does not comply with the regulation, it must take corrective action without delay, withdrawing the system from the market and deactivating it.

In addition, new wording has been added indicating that providers must immediately inform distributors and, where appropriate, other actors in the value chain of any non-compliance and any corrective action. They must also inform the national market surveillance authorities and the notified body that checked the system of the non-compliance and of any action taken.

Users of high-risk AI must also be prepared to cooperate with national authorities, the European AI Board and the European Commission to demonstrate their system’s compliance, a requirement so far limited to providers.

Upon a reasoned request from the European Commission or a market surveillance authority, providers or users must give access to the automatically generated logs. All of these public bodies would be bound by confidentiality obligations.

Allocation of responsibility

These amendments broadly aim to address the division of responsibilities along the complex AI supply chain. Notably, the amendment on cooperation with the authorities effectively puts “on hold” those relating to governance and general-purpose AI, the latter being a hotly debated topic.

Conservative MEPs have pushed for specific provisions on general-purpose AI, meaning systems that can be trained to perform different tasks. The question is how the provider can, in that case, be held responsible if it does not even know what purpose the end systems will serve.

Furthermore, the compromise aims to clarify the responsibilities of importers and distributors of high-risk AI systems, including how they will work with national authorities and how to mitigate unforeseen risks.

As a precondition for entering the EU market, AI providers will have to appoint an authorized representative to ensure that the conformity assessment procedure has been carried out and to make available to the competent national authority the relevant documentation, such as the declaration of conformity.

A comment in the margin of the document indicates that the Social Democrats continue to push for all high-risk users to be registered in the public database, not just public authorities.

Administrative procedures

MEPs also revisited their previous compromise on the role of notified bodies, the private companies responsible for verifying the compliance of high-risk systems, as well as notifying authorities, the national authorities that supervise the notified bodies.

The new text aims to ensure consistent administrative procedures across the bloc to remove potential obstacles. To that end, the procedure for assessing and monitoring conformity assessment bodies must be approved by all the national authorities concerned.

Furthermore, the new compromise deleted the “revolving door” paragraph, indicating that it needed further discussion. The provision would bar a notified body employee from working with an AI provider for one year after the body audits one of the provider’s systems.

In addition, following the proposal of Conservative MEPs, the text now clarifies the procedure allowing conformity assessment bodies in third countries to be accredited via a conformity assessment system or a mutual recognition agreement.

Technical standards

In the article on harmonised standards, the European People’s Party (EPP) obtained the addition of a reference to trustworthy AI, to be taken into account during the standardisation process.

The articles on the presumption of conformity and on conformity assessment will be discussed at the next technical meeting, scheduled for September 2, which will also determine whether the retention period for technical documentation should follow a fixed timeframe or cover the entire product life cycle.
