EU Council presidency proposes significant changes to draft AI law

The Slovenian presidency has circulated a compromise text on the EU's draft AI law, with major changes in the areas of social scoring, biometric recognition systems and high-risk applications, while flagging points for future discussion.

The rotating presidency of the Council of the EU shared a first compromise text on Monday (29 November), accompanying a progress report on the proposed European AI regulation.

Scope and definitions

In the progress report, seen by EURACTIV, EU countries reaffirm their exclusive competence over national security and insist that artificial intelligence (AI) systems developed exclusively for military purposes should be excluded from the scope of the regulation.

AI systems developed for the sole purpose of scientific research and development were also excluded from the scope.

The presidency has refined the definition of AI systems to better distinguish them from traditional software programs. AI systems are therefore considered to have the ability to process data or other types of inputs “to infer how to achieve a given set of human-defined goals by learning, reasoning, or modeling”, according to the compromise.

An AI provider is now defined as an individual or an organization “who has an AI system developed and who places this system on the market or puts it into use”. Providers will be responsible for ensuring compliance with the requirements of the regulation.

A new category of “general purpose” AI systems has been added, which should not be considered to fall within the scope of the regulation unless the system is trademarked or integrated into another system subject to the regulation.

Social scoring

The Commission’s proposal includes a ban on AI applications deemed to pose unacceptable risks. One of them is social scoring, a practice pioneered in China that is considered to enable mass surveillance.

The presidency now proposes to extend the ban on social scoring from public authorities to private entities. In addition, the definition of the prohibited practice has been extended to cover the exploitation of a person’s social or economic situation.

These changes could have profound implications for the financial sector, as, for example, loan interest rates are currently calculated based on the probability of repayment.
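To make that point concrete, here is a minimal, hypothetical sketch of such risk-based loan pricing in Python; the base rate, margin and loss-given-default figures are illustrative assumptions, not drawn from the article or the proposed regulation.

def quoted_interest_rate(probability_of_default: float,
                         loss_given_default: float = 0.45,
                         base_rate: float = 0.03,
                         margin: float = 0.01) -> float:
    # Annual rate covering the funding cost, a margin, and the expected loss
    # (default probability times loss given default) on the loan.
    expected_loss = probability_of_default * loss_given_default
    return base_rate + margin + expected_loss

# Example: a borrower scored with a 4% default probability is quoted 5.80%.
print(f"{quoted_interest_rate(0.04):.2%}")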

The use of AI systems to estimate insurance premiums has also been added to the list of high-risk systems.

Biometric recognition

The biometric identification systems covered by the legislation are no longer defined as “remote” systems, but as any system that identifies people without their consent.

The possibility of using real-time biometric identification systems has been extended to actors who are not law enforcement authorities but collaborate with them. The grounds for using these systems have been extended to include the protection of critical infrastructure.

Biometric systems can only be used with the authorization of a judicial authority. For urgent cases, the initial proposal provided that this authorization could also be requested ex post.

On the other hand, according to the new text, the authorization must “be requested without undue delay during its use, and if this authorization is refused, its use is terminated with immediate effect”.

High-risk systems

The AI law introduces specific obligations for AI systems that present a high risk to health, safety and fundamental rights. In its proposal, the Commission identified eight high-risk areas, which cannot be changed but only defined more precisely in the future.

The most significant change to the list of high-risk systems is the inclusion of digital infrastructure intended to protect the environment, including “AI systems intended for use in controlling emissions and pollution”.

In the area of law enforcement, the sub-category of crime analysis has been removed.

The compromise text foresees that the European Commission will have to evaluate the list of high-risk systems every two years, as well as the list of AI techniques and approaches covered by the regulation.

Outstanding issues

The progress report identifies a number of areas that will require further discussion.

The requirements for high-risk systems are flagged as being vague and requiring practical guidance to facilitate business compliance. The examples given relate to how data quality and transparency obligations could be fulfilled in practice.

In addition, several EU countries pointed out that the requirement for complete and error-free data could be largely unrealistic. “A number of delegations stressed that while this should be the case to the greatest extent possible, it should not be an absolute requirement,” the report reads.

Several Member States also underlined the complexity of the value chain, “where the boundaries between the different actors are not always clearly delineated”. As a result, the division of responsibilities may need to be reassessed to better reflect the reality of AI value chains.

Excessive administrative burden for SMEs was a recurring theme in the discussions, an issue that was also raised at the last EU Heads of State Summit.

Concerns were also expressed about the relationship between the AI Act and other pieces of EU legislation, so as to avoid conflicting rules, particularly regarding privacy protection, law enforcement, product safety and other sectoral legislation.
