The advancement of artificial intelligence (AI) prompts fears of a Terminator-style future where humans live as an underclass to the machines we created. However, humanity may face a far more immediate threat in the form of AI malware.

According to a new report by cybersecurity company Malwarebytes, titled When artificial intelligence goes awry: Separating science fiction from fact, malicious AI will be here “sooner than people might think”.


AI is developing at a faster pace than expected, and with close to one in ten new startups working on AI technology, the rate of progress is likely to continue.


Some 91% of businesses plan on having live AI initiatives by 2022. Yet, as the enterprise makes use of the technology, cybercriminals are likely to realise its potential too.

AI malware: using technology to enhance the threat

IBM Research recently developed DeepLocker, an AI-powered proof-of-concept attack tool that conceals its malicious payload until it reaches a specific target. The malware poses as video conferencing software and only deploys its payload once it has identified its intended victim, making it very difficult for security systems to detect.


The tool was developed to provide a better understanding of how AI in its current form could be used to enhance malware. However, there is no evidence to suggest that such AI malware has been deployed by malicious actors yet.


That said, Malwarebytes identifies some “realistic possibilities” that cybercriminals could exploit in the near future.


Worms

AI could potentially enable worms that learn from each detection event, avoiding the behaviour that led to the discovery of earlier copies. For example, a worm could update its own code to avoid being blacklisted by security software, or add randomness to its movement to evade pattern-matching rules.


Trojans

Malware variants already exist that change the files they masquerade as in order to avoid detection, and AI could be used to refine this technique, helping malicious files slip past security software undetected.

The threat posed by deepfakes in phishing campaigns

Aside from embedding AI inside malware itself, there is also potential for cybercriminals to use the technology to gain access to a target’s systems and spread malicious files.


We’re already beginning to see some of the negative consequences of AI with the spread of deepfakes. These altered videos, made possible by advances in machine learning, can be almost impossible to distinguish from the real thing. Two neural networks work in tandem – one superimposes an image or video on top of another, while the other analyses the result and evaluates how convincing it is.
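The pairing described above is essentially a generative adversarial network (GAN): one model generates fakes while the other learns to spot them. The sketch below, written in PyTorch, shows that adversarial loop in its simplest form; the tiny model architectures, image size and training settings are illustrative assumptions rather than details from the Malwarebytes report or any real deepfake tool.

```python
# A minimal sketch of the "two neural networks" described above, using PyTorch.
# The small fully connected models and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 100     # size of the random noise vector fed to the generator (assumed)
IMAGE_DIM = 64 * 64  # a flattened 64x64 greyscale image, purely for illustration

# Generator: turns random noise into a fake image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMAGE_DIM), nn.Tanh(),
)

# Discriminator: scores how convincing an image is (real vs. generated).
discriminator = nn.Sequential(
    nn.Linear(IMAGE_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    """One adversarial round: the discriminator learns to spot fakes,
    then the generator learns to fool the discriminator."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Train the discriminator on real images and freshly generated fakes.
    fakes = generator(torch.randn(batch, LATENT_DIM))
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fakes.detach()), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator so the discriminator labels its fakes as real.
    g_loss = loss_fn(discriminator(fakes), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# Illustrative call with a random batch standing in for real images.
training_step(torch.rand(16, IMAGE_DIM) * 2 - 1)
```

Real deepfake tools use far larger, face-specific models and extensive pre-processing, but the underlying push-and-pull is the same: each training round makes the fakes a little harder for the evaluating network – and eventually for humans – to tell apart from the real thing.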


There are already notable examples of these videos being used to spread fake news and disinformation. A faked video of Mark Zuckerberg recently surfaced in which he appears to claim control of “billions of people’s stolen data, all their secrets, their lives, their futures”.


With cybercriminals increasingly turning to spear phishing – highly targeted phishing emails designed to trick the recipient into believing they come from a legitimate individual or organisation – rather than mass spam campaigns, Malwarebytes believes that deepfake videos could soon become part of the cybercriminal’s arsenal.


According to Malwarebytes, two-thirds of businesses saw an increase in impersonation attacks over the past year, and close to three quarters of those that came under attack suffered losses as a result. This suggests that businesses are already struggling to deal with this kind of threat, and AI could make such attacks even harder to detect.


“Deepfakes could be used in incredibly convincing spear phishing attacks that users would be hard-pressed to identify as false,” the report states.


“Now imagine getting a video call from your boss telling you she needs you to wire cash to an account for a business trip that the company will later reimburse.”

Preventing AI-enhanced cybercrime

According to Malwarebytes, we could begin to see early-stage AI malware within the next one to three years.


Within ten years, the report warns, “we may be left in the dust” should businesses and organisations fail to take a proactive approach to defending against these threats.


“For the moment, we haven’t seen a fully-automated security strategy that would be able to overpower AI-driven malware,” the report says.


With most governments unprepared for an AI future, much of the responsibility falls on cybersecurity vendors to ensure that they are on an “equal playing field” when these threats do materialise. For vendors, building systems that can correctly identify these threats should be a top priority.
