By 2026, up to 90% of online content could be AI-generated, experts say

According to a report by Europol's Innovation Lab, greater vigilance will be required in the years to come regarding the veracity of the content we consult online. Experts estimate that up to 90% of online content could be artificially generated by 2026, paving the way for growing disinformation, which will in turn foster the proliferation of crimes based on the use of "deepfakes".

Today, artificial media (media generated or manipulated using artificial intelligence) is in most cases produced for entertainment purposes, to improve services or to improve quality of life. But the proliferation of these artificial media and the improvement of the underlying technologies have opened the door to disinformation, warns the report by Europol's Innovation Lab, titled "Facing Reality? Law Enforcement and the Challenge of Deepfakes".

Advances in artificial intelligence, coupled with the public availability of large databases of images and videos, are driving an increase in both the volume and the quality of deepfakes. The report provides a detailed overview of the criminal use of fake content and lists the challenges that law enforcement agencies now face in detecting and preventing it. Evidence tampering, fake pornographic videos and CEO fraud (in which a scammer impersonates a company's CEO and instructs the accounting department to make a large transfer) are among the serious crimes that deepfakes make possible.

New technologies that create new threats

Deepfaking (or hyperfaking) consists of superimposing existing video or audio files onto other media files for malicious purposes. A person's face is replaced or altered, and/or they are made to say things they never said, in order to harm them or damage their image. Some forgeries are particularly convincing. "Auditory and visual recordings of an event are often treated as a truthful account of it. But what if these media can be artificially generated, adapted to show events that never took place, or made to distort the truth?", warn the authors of the report.

Deepfakes rely on increasingly sophisticated technologies (deep learning, generative adversarial networks, etc.) and, thanks to ever-larger training datasets, are more realistic than ever. Their impact on privacy will undoubtedly lead to the emergence of new categories of offenses that will have to be monitored, warn the experts, who say they are "particularly concerned about the weaponization of social media and the impact of disinformation on public discourse and social cohesion".

As an example, the report mentions that the United States exposed a Russian plot to justify the invasion of Ukraine with fake videos, before the conflict even began. Then, after the invasion, Ukrainian government officials warned that Russia might release deepfakes showing President Volodymyr Zelenskyy surrendering. "Examples like this show that this kind of disinformation can be dangerous. Its aim is to escalate existing conflicts and debates, undermine trust in public institutions and stir up anger and emotion in general," note the experts. Such a loss of trust can make the work of the authorities much more difficult.

Experts lament the ease with which it is now possible to create a fake emergency alert warning of an impending attack, or to disrupt an election (or other aspects of political life) by broadcasting a fake recording of a candidate or other political figure. Much of the deepfake content created today can still be identified manually, by human operators spotting tell-tale signs in manipulated images and videos. But this is labor-intensive work that requires a large, skilled workforce and cannot be applied at scale.

A growing but little-known phenomenon

Another difficulty facing the authorities is that, even today, the public itself seems unaware of the dangers inherent in deepfakes. Research in 2019 showed that nearly 72% of respondents to a UK survey were unaware of deepfakes and their impact, the report notes.

This figure is particularly worrying because it means that most people are unable to identify such fake media. Unfortunately, more recent experiments have shown that increased awareness of deepfakes does not necessarily improve the chances of detecting them. That is why specialists expect their malicious use to increase further in the coming years.

They also point out that the rollout of 5G will greatly benefit the creators of fake content: the additional bandwidth it offers will allow them to exploit the power of cloud computing to manipulate video streams in real time. Deepfake technologies could thus be applied in videoconferencing or live-streaming contexts.

The digital world is evolving at a particularly rapid pace, in a fragile global socio-economic and climatic context. Even if not everyone is able to fully grasp the technologies involved (and what they make possible), the essential thing is to keep a critical mind and not to assume that everything we see online is true.

At the same time, it is crucial that law enforcement agencies, online service providers and other organizations have the right skills and technologies if they are to keep up with the increasing criminal use of deepfakes, the report points out. These preventative technologies include technical safeguards against video tampering (in the form of authenticity markers) and deepfake detection software, designed to spot mismatches between mouth dynamics and spoken words, inconsistencies in facial movements, or unnatural variations in skin color. Prevention and detection of deepfakes must now be a priority for law enforcement.
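To illustrate the flavor of such consistency checks (a toy heuristic on made-up numbers, not Europol's software or any production detector), the sketch below flags video frames whose average skin-tone value jumps abruptly between consecutive frames, the kind of unnatural variation a tampered face can introduce:

```python
def flag_inconsistent_frames(frame_values, threshold=10.0):
    """Return indices of frames whose mean skin-tone value jumps by more
    than `threshold` relative to the previous frame (a crude tamper signal)."""
    return [
        i
        for i in range(1, len(frame_values))
        if abs(frame_values[i] - frame_values[i - 1]) > threshold
    ]

# Made-up per-frame mean skin-tone values; frame 3 jumps abruptly.
frames = [100.0, 101.5, 99.0, 140.0, 100.5]
print(flag_inconsistent_frames(frames))  # → [3, 4]
```

A real detector would of course work on pixel data and learned features rather than a single hand-picked statistic; this only shows the frame-to-frame consistency idea the report alludes to.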

Source: Europol
