Criminals leverage AI for malicious use

Cybercriminals will leverage AI both as an attack vector and an attack surface.

A new report by Europol, the United Nations Interregional Crime and Justice Research Institute (UNICRI) and Trend Micro looking into current and predicted criminal uses of artificial intelligence (AI) concludes that cybercriminals will leverage AI both as an attack vector and an attack surface. 

Deepfakes are currently the best-known use of AI as an attack vector. However, the report warns that new screening technology will be needed in the future to mitigate the risk of disinformation campaigns and extortion, as well as threats that target AI data sets.

For example, AI could be used to support:

  • convincing social engineering attacks at scale;
  • document-scraping malware to make attacks more efficient;
  • evasion of image recognition and voice biometrics;
  • ransomware attacks, through intelligent targeting and evasion;
  • data pollution, by identifying blind spots in detection rules.

“As AI applications start to make a major real-world impact, it’s becoming clear that this will be a fundamental technology for our future,” said Irakli Beridze, Head of the Centre for AI and Robotics at UNICRI. “However, just as the benefits to society of AI are very real, so is the threat of malicious use.” 

The paper also warns that AI systems are being developed to enhance the effectiveness of malware and to disrupt anti-malware and facial recognition systems.

“Cybercriminals have always been early adopters of the latest technology and AI is no different. As this report reveals, it is already being used for password guessing, CAPTCHA-breaking and voice cloning, and there are many more malicious innovations in the works,” said Martin Roesler, head of Forward-Looking Threat Research at Trend Micro.

The three organizations make several recommendations:

  • Harness the potential of AI technology as a crime-fighting tool to future-proof the cybersecurity industry and policing;
  • Continue research to stimulate the development of defensive technology;
  • Promote and develop secure AI design frameworks;
  • De-escalate politically loaded rhetoric on the use of AI for cybersecurity purposes;
  • Leverage public-private partnerships and establish multidisciplinary expert groups.
Editor

A team of dedicated journalists whose mission is to advocate for ethics and transparency in the maritime industry.
