Report explores current malicious uses of AI to better understand the future of cybercrime

The exponential growth of communication technologies has been accompanied by a rise in cybercrime. In the research paper “Malicious Uses and Abuses of Artificial Intelligence”, security provider Trend Micro, the United Nations Interregional Crime and Justice Research Institute (UNICRI) and Europol present the state of malicious uses and abuses of AI and ML technologies in 2020, as well as plausible future scenarios in which cybercriminals could exploit these technologies for malicious purposes.

AI can bring huge benefits to society and help solve some of the biggest challenges we face today, but it also increases the risks of cybercrime.

The report provides law enforcement, policymakers, and other organizations with insights into existing and potential attacks leveraging AI and recommendations on how to mitigate those risks.

Edvardas Šileris, Head of Europol’s European Cybercrime Centre (EC3), said:

“AI promises the world more efficiency, automation and autonomy. At a time when the public is increasingly concerned about the possible misuse of AI, we need to be transparent about the threats, but also consider the potential benefits of AI technology. This report will not only help us anticipate possible malicious uses and abuses of AI, but also proactively prevent and mitigate these threats. This is how we can unlock the potential of AI and benefit from the positive use of AI systems”.

The document warns that AI systems are being developed to improve the effectiveness of malware and disrupt anti-malware and facial recognition systems.

Martin Roesler, Head of Forward-Looking Threat Research at Trend Micro, says:

“Cybercriminals have always been early adopters of the latest technologies and AI is no different. As this report reveals, it is already being used for password guessing, CAPTCHA cracking and voice cloning, and many more malicious innovations are on the way.”

The use of AI for cybercrime

The first part of the report presents different AI-based methods already employed by cybercriminals.

Malware

The AI-supported or AI-enhanced cyberattack techniques that have been studied demonstrate that criminals are already taking steps to broaden their use of AI. However, malware developers may also be using AI in subtler ways that researchers and analysts have not yet detected.

For example, a 2015 study showed that a system could generate email messages capable of bypassing spam filters. The approach uses a generative grammar to create a large set of phishing emails with a high degree of semantic quality; these variants probe the filter, letting the system adapt and identify content that is no longer detected.
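To make the idea concrete (the report does not publish the study’s actual grammar, so the rules below are invented for illustration), such a grammar can be written as a set of production rules expanded at random, with each expansion yielding a different but coherent phishing message:

```python
# Illustrative sketch of a generative grammar for phishing text.
# The rules below are invented for this example; the 2015 study's
# actual grammar is not described in the report.
import random

GRAMMAR = {
    "EMAIL":    ["GREETING BODY CTA"],
    "GREETING": ["Dear customer,", "Hello,", "Dear valued user,"],
    "BODY":     ["we detected URGENCY on your account.",
                 "your account triggered URGENCY."],
    "URGENCY":  ["unusual activity", "a suspicious sign-in",
                 "a billing problem"],
    "CTA":      ["Please verify your details here: <link>",
                 "Confirm your identity at <link> immediately."],
}

def expand(symbol: str) -> str:
    """Recursively expand a grammar symbol into concrete text."""
    if symbol not in GRAMMAR:
        return symbol  # terminal word, emit as-is
    production = random.choice(GRAMMAR[symbol])
    return " ".join(expand(token) for token in production.split())

# Each call yields a different, semantically coherent variant,
# which is what lets such systems probe and adapt to spam filters.
for _ in range(3):
    print(expand("EMAIL"))
```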

In 2017, at the Black Hat USA information security conference, researchers demonstrated how ML techniques could analyze years of data on Business Email Compromise (BEC) attacks, a form of email fraud used to defraud organizations, in order to identify potential attack targets.

This system leverages both leaked data and freely available social media information and can accurately predict whether an attack will be successful.

AI-Supported Password Guessing

Cybercriminals use ML to improve password-guessing algorithms. Tools such as Hashcat and John the Ripper compute the hashes of many variants of frequently used passwords and compare them against a target hash to identify the password that produced it.
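The underlying mechanic is straightforward and can be sketched in a few lines of Python; the SHA-256 hash and toy wordlist here are stand-ins, as real tools support many hash formats and vastly larger dictionaries:

```python
# Minimal illustration of dictionary-based hash matching, the mechanic
# behind tools like Hashcat and John the Ripper (toy example only).
import hashlib

def sha256_hex(password: str) -> str:
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

# A leaked hash whose plaintext we want to recover.
target_hash = sha256_hex("sunshine1")  # stand-in for a real leak

wordlist = ["123456", "password", "sunshine", "sunshine1", "qwerty"]

for candidate in wordlist:
    if sha256_hex(candidate) == target_hash:
        print(f"Match found: {candidate}")
        break
else:
    print("No match in wordlist.")
```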

By leveraging neural networks and generative adversarial networks (GANs), cybercriminals can analyze large password datasets and generate password variants that fit the statistical distribution of real passwords, making their guesses more accurate and targeted.

The report’s authors thus discovered, in an article listing a collection of open-source hacking tools, an AI-based tool that can analyze a large set of passwords recovered from data leaks. The tool improves its ability to guess passwords by training a GAN to learn how people tend to change and update their passwords, most often by adding a letter or a number.

They also found, in an underground forum post from February 2020, a GitHub repository hosting a password-analysis tool capable of parsing 1.4 billion credentials and generating password-variation rules.
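As a rough illustration of the patterns such a GAN would learn from leaked data, the habits the report mentions, appending a letter or number and making simple character substitutions, can be hand-coded; a trained model would discover these rules, and subtler ones, on its own:

```python
# Hand-written stand-in for patterns a password GAN would learn from
# leaked data: users most often append a digit or symbol, capitalize,
# or substitute look-alike characters. Illustrative only; this is not
# the tool found on the forum.
from itertools import islice
from typing import Iterator

SUBSTITUTIONS = {"a": "@", "o": "0", "e": "3", "s": "$", "i": "1"}

def variants(base: str) -> Iterator[str]:
    """Yield likely human variations of a base password."""
    yield base
    yield base.capitalize()
    for suffix in "0123456789!":
        yield base + suffix                 # appended digit or symbol
    for char, repl in SUBSTITUTIONS.items():
        if char in base:
            yield base.replace(char, repl)  # leetspeak substitution

print(list(islice(variants("sunshine"), 8)))
```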

Breaking CAPTCHAs with AI

The application of ML to break CAPTCHA security systems is frequently discussed on criminal forums. CAPTCHA images are commonly used on websites to thwart criminals attempting to automate attacks, such as the mass creation of new accounts.

According to the report, software that implements neural networks to solve CAPTCHAs is being tested on criminal forums.
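While the report does not describe a specific architecture, the typical approach is a small convolutional network that classifies individual, pre-segmented CAPTCHA characters; a minimal PyTorch sketch (with assumed input size and class count) might look like this:

```python
# Minimal sketch of the idea behind neural-network CAPTCHA solvers:
# a small CNN that classifies one pre-segmented character image.
# Purely illustrative; the report does not describe a specific model.
import torch
import torch.nn as nn

class CaptchaCharNet(nn.Module):
    """Classifies one 28x28 grayscale character into 36 classes (0-9, A-Z)."""
    def __init__(self, num_classes: int = 36):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = CaptchaCharNet()
dummy_batch = torch.randn(8, 1, 28, 28)     # 8 fake character images
logits = model(dummy_batch)                 # shape: (8, 36)
print(logits.argmax(dim=1))                 # predicted class per image
```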

Social engineering and AI

Recognized as one of the greatest threats to corporate security today, social engineering lets cybercriminals obtain seemingly legitimate, authorized access to confidential information.

The report cites discussions found on various underground forums about AI-based tools for improving social engineering tasks.

According to a Europol report, a recognition tool called “Eagle Eyes”, advertised on the French Freedom Zone forum, is claimed to be able to find all the social media accounts associated with a specific profile. It uses facial recognition algorithms to match the profiles of a user operating under different names.

Another tool identified by Europol enables real-time voice cloning: a voice recording of just five seconds is enough for a malicious actor to clone a target’s voice. A UK-based energy company was duped in this way and transferred nearly £200,000 to a Hungarian bank account; the cybercriminal had used deepfake audio technology to impersonate the company’s CEO and authorize the payments.

Deepfakes and AI

Deepfakes involve the use of AI techniques to create or manipulate audio and visual content so that it appears authentic. A blend of “deep learning” and “fake media”, deepfakes are used in particular for disinformation campaigns because they are difficult to distinguish from legitimate content immediately, even with technological solutions. Given the widespread use of the internet and social media, deepfakes can reach millions of people in different parts of the world very quickly.

In the case of fake videos, AI makes it possible to replace one person’s face in a video sequence with another’s, using a large number of photos, and the results are convincing enough to fool many viewers.

Last May, a deepfake using the face of Elon Musk was broadcast on YouTube to defraud viewers, who sent Bitcoin and Ethereum cryptocurrencies to the cybercriminals behind it.

Future Uses of AI and ML for Cybercrime

The report’s authors expect cybercriminals to exploit AI in a variety of ways in the future, seeking to broaden the scope and scale of their attacks, evade detection, and abuse AI as both an attack vector and an attack surface.

They anticipate attacks on organizations via social engineering tactics: cybercriminals can automate the early stages of an attack through content generation, improve business intelligence gathering, and speed up the identification of potential victims and compromisable business processes. This will enable faster and more accurate fraud through various attacks, including phishing and business email compromise (BEC) scams.

AI can also be misused to manipulate cryptocurrency trading. The authors refer to a forum discussion about AI-powered trading bots trained on successful strategies from historical data in order to make better predictions and trades.

Furthermore, AI could be used to harm or inflict physical damage on individuals in the future. The authors report that AI-powered facial recognition drones carrying one gram of explosive are currently in development. These drones, designed to look like small birds or insects so as to appear inconspicuous, can be used for micro-targeted or single-person bombing and can be operated via cellular internet.

AI and ML technologies have many positive use cases, but they are also being put to criminal and malicious ends. There is therefore an urgent need to understand the capabilities, scenarios and attack vectors that show how these technologies are already being leveraged, in order to better protect systems, devices and the general public from advanced attacks and abuse.

The three organizations make several recommendations to conclude the report:

  • Harnessing the potential of AI technology as a crime-fighting tool to future-proof the cybersecurity and law enforcement industry
  • Pursue research to stimulate the development of defensive technologies
  • Promote and develop secure AI design frameworks
  • Defusing politically charged rhetoric about the use of AI for cybersecurity purposes
  • Leverage public-private partnerships and establish multidisciplinary expert panels

For more information: Malicious Uses and Abuses of Artificial Intelligence
