The United States is pushing for a narrower definition of artificial intelligence (AI), a broader exemption for general-purpose AI, and individualized risk assessments in the EU's Artificial Intelligence Act (AI Act), according to a document obtained by EURACTIV.
The non-paper is dated October 2022 and was sent to selected government officials in EU capitals and to the European Commission. It repeats many of the ideas, and much of the wording, of a first set of comments sent to MEPs last March.
“Many of our comments are driven by our growing cooperation in this area within the framework of the EU-US Trade and Technology Council (TTC) and by our concerns that the proposed legislation could either promote or restrict further such cooperation,” the document reads.
The document is a reaction to progress made by the Czech Presidency of the Council of the EU on AI regulation last month. A spokesperson for the US Mission to the European Union declined EURACTIV’s request for comment.
Definition of AI
Although the Americans supported the changes made by the Czech Presidency aimed at clarifying the definition of artificial intelligence, they warned that this definition “still includes systems that are not sophisticated enough to warrant special consideration under AI-driven legislation, such as systems based on manually-crafted rules”.
To avoid over-inclusiveness, the non-paper suggests using a narrower definition that follows the spirit of the one provided by the Organisation for Economic Co-operation and Development (OECD) and clarifies what is and is not included.
General Purpose AI
The non-paper recommends different liability rules for vendors of general-purpose AI systems — large models that can be adapted to perform various tasks — and for users of those models who might employ them for high-risk applications.
The Czech Presidency has proposed that the Commission adapt the obligations of the AI Act to the specificities of general-purpose AI at a later stage through an implementing act.
By contrast, the US administration warns that imposing risk management obligations on these vendors could prove “very constraining, technically difficult and in some cases impossible”.
In addition, the non-paper argues against requiring general-purpose AI vendors to cooperate with their users to help them comply with the AI Act, including by disclosing confidential business information or trade secrets, even subject to appropriate safeguards.
The main vendors of general-purpose AI systems are large US companies such as Microsoft and IBM.
High-risk systems
For classifying a use case as high-risk, the US administration has advocated a more individualized risk assessment that would take into account the origin of the threat, vulnerabilities, the likelihood of harm occurring, and its significance.
The impact on human rights, by contrast, should only be assessed in specific contexts. The Americans also argued for an appeal mechanism for companies that believe their systems have been wrongly labeled as high-risk.
With regard to international cooperation, Washington wants the standards of the US National Institute of Standards and Technology (NIST) to be accepted as an alternative means of compliance to self-assessments under the AI Act.
The non-paper also states that “in areas considered ‘high risk’ under the law, many US government agencies are likely to stop sharing rather than risk highly guarded methods being disclosed more widely than they wish.”
While the document expresses support for the Czech Presidency’s approach in adding an extra layer for the qualification of high-risk systems, it also warns of possible inconsistencies with the regulatory regime of the Medical Devices Regulation.
The United States is pushing for the European Artificial Intelligence Board to play a bigger role, collectively bringing together the relevant national authorities from across the EU, rather than leaving matters to each country's authority. The Americans also propose creating a permanent sub-group within the Board composed of stakeholder representatives.
Since the Board will be responsible for providing advice on technical specifications, harmonized standards and the development of guidelines, Washington would like wording that allows the participation of representatives from like-minded countries, at least in this sub-group.
The European Commission has grown increasingly reluctant to involve third countries in standard-setting, while the United States is pushing for closer bilateral cooperation.
According to the non-paper, the regulation could hamper cooperation with third countries, as it covers public authorities located outside the EU whose activities have an impact on the bloc, unless there is an international agreement on police and judicial cooperation.
The fear is that the US administration would have to end cooperation with EU authorities on border control management, which the AI regulation treats as separate from law enforcement activities.
The reference to “agreements” is also considered too narrow, since reaching binding agreements on AI cooperation could take years. Even existing law enforcement cooperation could suffer, as it also takes place outside formal agreements.
Additionally, the non-paper suggests a more flexible exemption for the use of biometric recognition technologies where there is a “credible” threat, such as a terrorist attack. The argument is that overly strict wording could prevent practical cooperation aimed at ensuring the safety of major public events.
Access to source code
In May, the French Presidency of the Council of the EU introduced the possibility for market surveillance authorities to be granted full access to the source code of high-risk systems when “necessary” to assess their compliance with the AI rules.
Washington believes that what counts as “necessary” should be better defined, that a transparent list of criteria should be applied to avoid subjective and inconsistent decisions across the EU, and that companies should be able to appeal the decision.