European Union: companies held liable for damage caused by AI?

The European Union (EU) is rolling out new rules that will make it easier to sue artificial intelligence (AI) companies for harm. The newly unveiled bill, which is expected to be ratified within a few years, is part of Europe’s effort to prevent AI developers from releasing dangerous systems. It has not won everyone over: on the one hand, technology companies complain that the bill could deter innovation; on the other, consumer advocates argue that it does not go far enough.

Powerful AI technologies are increasingly shaping our lives, our relationships, and our societies, and their harmful effects are well documented. Social media algorithms promote misinformation, facial recognition systems are often discriminatory, and predictive AI systems used to approve or reject loan applications can disadvantage minorities.

The new draft law, dubbed the “AI Liability Directive,” will reinforce the EU’s AI Act, which is due to come into force soon. The AI Act requires additional checks for “high risk” uses of AI, i.e. those most likely to harm people, such as systems used in policing, recruitment, or health services.

The new bill aims to give consumers and businesses the right to sue for damages after being harmed by an AI system. The goal is to hold the developers, producers, and users of these technologies accountable and to oblige them to explain how their AI systems were built and trained. Tech companies that fail to follow the rules risk Europe-wide class action lawsuits.


For example, job seekers who can prove that they have been discriminated against by an AI system used for CV screening can ask a court to force the AI company to give them access to information about the system. This way, they can identify those responsible and find out what went wrong. Armed with this information, they can sue.

The bill still has to go through the European Union’s legislative process, which should take around two years. Until then, the proposal will be amended by Members of the European Parliament and EU governments. It will also likely be the subject of intense lobbying from tech companies, which say such rules could have a “chilling” effect on innovation.

Whether successful or not, this new European legislation will have an impact on how AI is regulated around the world.

In particular, the bill could have a negative impact on software development, according to Mathilde Adjutor, head of European policy for the technology lobbying group CCIA, which represents companies such as Google, Amazon and Uber.

Under the new rules, “developers not only risk being held liable for software bugs but also for the potential impact on the mental health of users,” she says.

For her part, Imogen Parker, associate director of policy at the Ada Lovelace Institute, an AI research organization, believes this bill will help shift power from companies to consumers, a rebalancing she considers particularly important given AI’s potential for discrimination. In addition, the bill will ensure that when an AI system causes harm, there is a common way to seek redress across the EU, says Thomas Boué, European policy manager at tech lobby BSA, which counts Microsoft and IBM among its members.

However, some consumer rights organizations and activists say the proposals don’t go far enough and don’t make it easy enough for consumers to file complaints.

Ursula Pachl, deputy director-general of the European Consumers’ Organisation, says the proposal is a “genuine disappointment” because the onus is on consumers to prove that an AI system harmed them or that an AI developer was negligent.

“In a world where ‘black box’ type AI systems are extremely complex and obscure, it will be virtually impossible for the consumer to use the new rules,” says Ursula Pachl. For example, it will be extremely difficult to prove that racial discrimination against a person is due to the way a credit rating system was set up.

The bill also fails to take into account indirect harms caused by artificial intelligence systems, adds Claudia Prettner, EU representative at the Future of Life Institute, an NGO that focuses on the existential risks associated with AI. A better version of the text would hold companies liable when their actions cause damage without necessarily requiring fault, like the rules that already exist for cars or animals, she continues.

“AI systems are often built for one purpose but then cause unexpected harm in another area. Social media algorithms, for example, were designed to maximize time spent on platforms but inadvertently favored polarizing content,” she says.


The EU wants its AI law to become the global benchmark for regulating artificial intelligence. Other nations, such as the United States, where efforts to regulate this technology are underway, are watching the situation closely. The Federal Trade Commission is considering rules on how companies process data and build algorithms, and it has forced companies that collected data illegally to delete their algorithms. This year, for example, the agency forced weight-loss specialist Weight Watchers to do so after it illegally collected data on children.

“It is in the interest of citizens, businesses and regulators that the EU properly addresses the issue of liability in artificial intelligence. Without this, AI cannot be useful to people and to society,” says Imogen Parker.

Article by Mélissa Heikkilä, translated from English by Kozi Pastakia.
