The EU AI Act, a burden for the open source community

Still at the proposal stage, the future EU regulation on artificial intelligence is already sparking debate. Its application as described by the European Commission could harm the development of open source models created by independent developers.

The European Union’s proposed regulation on artificial intelligence is not to everyone’s taste. Published in April 2021 by the European Commission, the document aims to harmonize the rules governing AI. The project, however, could affect more than one company, and in particular the creators of these AI models: open source developers. The American think tank Brookings, which has studied the subject, raises this point: “While intended to enable safer use of these tools, this proposal would create legal liability for open source general-purpose AI (GPAI) models, which would harm their development. This could further concentrate power over the future of AI in large IT companies and prevent the research that is essential to the public’s understanding of AI.”

In its draft regulation, the European Commission wants the developers of these AI systems to meet certain requirements in terms of risk management, data governance, technical documentation, transparency instructions, and accuracy and cybersecurity standards. Another point raised by the think tank: the definitions of AI and of so-called general-purpose models remain rather vague. This lack of detail could weaken the framework for the technology, causing it to cover only some of the actors concerned. If a precise definition is to be given, it should be spelled out: GPAI models are trained on large data sets using significant computing power and are intended to perform multiple tasks, which may include image generation, language translation, controlling a robotic arm, playing video games, and many other uses.

The concentration of AI power, a danger

Among the fears it expresses, Brookings believes that if a company deploys an open source model and then runs into difficulties operating it, due to unforeseen or uncontrollable effects, it could go after the developer and take legal action against them. This possibility suggests another scenario: one where developers themselves, facing so much pressure and so many rules, think twice before releasing open source code. If the open source community falters, big tech companies would end up steering the development and direction of AI, creating a form of market oligopoly.

As the field of artificial intelligence is already dominated by global IT companies, pulling the rug out from under the open source developer community would only be another way to slow such initiatives, which are expensive and require rare technical skills. In an interview with TechCrunch, Oren Etzioni, founding CEO of the Allen Institute for AI, expressed his concern. “Free software developers should not be subject to the same constraints as those who develop commercial software. It should always be possible to provide free software ‘as is’. Take the case of a student who develops an AI capability: he does not have the means to comply with European regulations and may be forced not to distribute his software, which has a chilling effect on academic progress and the reproducibility of scientific results,” he explains.

The development of harmful uses of AI

According to Oren Etzioni, EU regulators should “focus more on specific AI applications”. To date, AI is not regulated, and the uses stemming from it are multiplying at lightning speed, sometimes for purposes harmful to people and society. Various forms of disinformation, bots, and deepfakes are now among the dangers confronting society as a whole without any regulation. What is today only a proposal must soon be approved by the European Parliament and the Council of the EU, which can decide either to keep a broad, general regulation or to take another path and regulate specific AI applications instead.
