The European Commission is to present a liability regime for damage arising from artificial intelligence (AI) that would introduce a rebuttable presumption of causation, shifting the burden of proof onto the defendant, according to a draft text obtained by EURACTIV.
The AI liability directive is expected to be released on 28 September. It is meant to complement the AI Act, an upcoming regulation that introduces requirements for AI systems based on their level of risk.
“This directive provides in a very targeted and proportionate way for reductions in the burden of proof through the use of disclosure and rebuttable presumptions,” the draft reads.
“These measures will help people seeking compensation for damage caused by AI systems to manage the burden of proof, so that justified liability claims can succeed.”
The proposal follows a European Parliament resolution adopted in October 2020, which called for easing the burden of proof and establishing a strict liability regime for AI-based technologies.
For consistency, the definitions of AI systems, including high-risk ones, and of the providers and users of those systems are taken by direct reference from the AI Act.
The directive applies to non-contractual civil law claims for damage caused by an AI system under fault-based liability regimes, i.e. where someone can be held responsible for a specific act or omission.
The idea is that these provisions would complement the existing civil liability regimes since, apart from the presumption, the directive would not modify national rules on the burden of proof, the degree of certainty required for the standard of proof, or the definition of fault.
While criminal law and transport-related liabilities are excluded from the scope, the provisions would also apply to national authorities insofar as they are covered by obligations under the AI Act.
Disclosure of information
A potential claimant may ask the provider of a high-risk system to disclose information that the provider is required to retain under its obligations in the AI Act, which mandates keeping documentation for ten years after an AI system has been placed on the market.
The information requested could include the datasets used to develop the AI system, technical documentation, log files, the quality management system, and any corrective actions taken.
The recipient can refuse the request, which can then be renewed before a court, where a judge will assess whether it is justified and necessary to support a compensation claim following an incident involving AI.
Such disclosures are subject to safeguards and the principle of proportionality, including the protection of trade secrets. The court may also require the provider to retain the information for as long as it deems necessary.
If a provider refuses to comply with a disclosure order, the court will presume that the provider failed to comply with the relevant obligations, unless the defendant proves otherwise.
Non-compliance with AI legislation
The directive aims to provide a legal basis for seeking compensation following a breach of specific obligations set out in the AI Act.
Since establishing a causal link between non-compliance and harm would require explaining the inner workings of the AI system, the approach is to presume that link in certain circumstances.
For AI systems that are not high-risk, the presumption applies if it is demonstrated that a rule that could have prevented the damage was breached and that the defendant is responsible for that breach.
For high-risk systems, the presumption applies against the provider where appropriate risk management measures were not in place, the training dataset did not meet the quality requirements, or the system did not meet the requirements of transparency, accuracy, robustness and cybersecurity.
Other triggers include a lack of adequate human oversight or a failure to immediately take the necessary corrective actions.
The presumption applies to users of high-risk systems where they failed to follow the accompanying instructions or exposed the system to input data not relevant to its intended purpose.
In other words, the onus would be on the AI provider that broke the rules to prove that its non-compliance did not cause the harm, for instance by demonstrating that more plausible explanations for the damage exist.
Non-compliance with other requirements
A similar principle has been introduced for breaches of other European or national requirements. Here too, the presumption of causation only applies where the breached "duty of care" is relevant to the damage in question and was intended to prevent it.
In this case, the conditions for the presumption are that the AI system can be “reasonably assumed” to have been involved in causing the damage, and that the claimant has demonstrated non-compliance with the relevant requirements.
For the Commission, this approach “constitutes the least restrictive measure to meet the need for fair compensation of the victim, without externalising the cost to the latter”.
Follow-up and transposition
The EU executive is to establish a monitoring programme for incidents involving AI systems, with a targeted review within five years to assess whether additional measures are needed.
Member states will have two years from its entry into force to transpose the directive into national law. In doing so, they may adopt national rules that are more favourable to claimants, provided these are compatible with EU law.