AI vs AI: Artificial Intelligence and the DDoS Attack
The cybersecurity industry is turning to artificial intelligence and machine learning to defend against cyber threats. Cybercriminals are adding it to their arsenal too. Luke Christou sat down with Marc Wilczek, chief operating officer for Link11, to discuss how the company uses AI to defend its customers against increasingly complex DDoS attacks
More than two decades after distributed denial of service (DDoS) attacks first rose to prominence, the age-old cybersecurity threat appears to have reared its head once again.
DDoS attacks involve deliberately flooding a server with more traffic than it can handle, in an attempt to crash the system and render its services inaccessible to legitimate users. Despite a recent period of decline, these attacks are becoming more frequent, more complex and more difficult to defend against. Global organisations such as Blizzard, Telegram and Wikipedia have all been targeted in 2019.
“DDoS attacks have been out there for a long period of time, but it’s almost like their renaissance at the moment,” explains Marc Wilczek, chief operating officer for Link11.
Cybersecurity firm Kaspersky found DDoS attacks increased by 84% in the first quarter of 2019 compared with the last quarter of 2018, and the threat has only continued to develop this year. Link11’s H1 2019 Distributed Denial of Service Report shows that average attack bandwidth rose by almost three-quarters, from 3.8 gigabits per second (Gbps) to 6.6Gbps, between Q1 and Q2. Likewise, the percentage of complex attacks using multiple attack vectors increased from 47% to 62% over the first half of the year.
These figures are partly owed to the ease with which DDoS-for-hire services can be procured on the dark web for relatively small sums of money. Transactions on these underground marketplaces are usually carried out using largely anonymous cryptocurrencies, making it difficult for law enforcement to step in.
The internet of things (IoT) is also an issue, given the millions of devices left unsecured with default username and password combinations. Malicious actors scan for these devices, breach them, and add them to botnets – large networks of compromised devices used to carry out cyberattacks.
This resurgence of the DDoS threat comes at a time when protecting IT infrastructure is more important than ever.
“Unlike in the past, these days revenues are produced through digital services,” Wilczek explains. “If they are shut down, disrupted, this can cause major financial damage.”
Using artificial intelligence to fight the DDoS threat
Numerous cybersecurity vendors have developed solutions to help mitigate the impact of DDoS attacks. Link11’s particular solution aims to do so by deploying artificial intelligence, machine learning and automation to thwart the attempts of cybercriminals.
However, Wilczek stresses that Link11 didn’t turn to AI because it turns heads, but because it provides a genuine benefit to its customers – most notably in how little time it takes to mitigate a threat. By utilising AI technology, Link11 is able to offer its customers a guarantee: 10 seconds to detect, index and block a threat, or your money back.
“You see so many vendors talking about AI. Everyone loves it. But unless you translate that into user benefits and economic benefits, what’s the point of using it?” Wilczek questions.
Link11’s AI and machine learning technology is predominantly used for two purposes. The first is to monitor each client’s traffic in order to spot anomalies. One example of the kind of activity that would be flagged by Link11’s AI defences: initially, the vast majority of a site’s traffic originates from Germany (IP range 59), before a sudden and drastic spike in connections arrives from Vietnam (IP range 242).
The system looks beyond traffic origin, using metrics such as the speed at which these connections appear and the amount of bandwidth involved to determine whether a response is needed.
“As soon as we start seeing these abnormalities, combined with latency going down, we start cutting off these requests to a point where the system behaves normally again,” Wilczek says. “All of this is being performed through automation and AI.”
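To make that concrete, the sketch below shows one way such monitoring could work in principle: it tracks the connection rate and the share of traffic coming from a single origin over a short sliding window, and flags the window when both jump at once. It is a minimal illustration in Python – the class, thresholds and update rule are assumptions made for this example, not Link11’s implementation, and a real system would also weigh the bandwidth and latency signals described above.

```python
# Illustrative sketch only: not Link11's system; thresholds and names are assumptions.
from collections import Counter, deque
import time

class TrafficMonitor:
    """Toy sliding-window monitor that flags sudden spikes dominated by a single origin."""

    def __init__(self, window_seconds=10, rate_multiplier=5.0, origin_share=0.4):
        self.window_seconds = window_seconds
        self.rate_multiplier = rate_multiplier  # how far above baseline counts as a spike
        self.origin_share = origin_share        # fraction of traffic from one origin that looks skewed
        self.baseline_rate = None               # connections/sec observed under normal load
        self.events = deque()                   # (timestamp, origin) pairs inside the window

    def observe(self, origin: str) -> bool:
        """Record one incoming connection; return True if the window now looks anomalous."""
        now = time.time()
        self.events.append((now, origin))
        # Drop events that have fallen out of the sliding window.
        while self.events and now - self.events[0][0] > self.window_seconds:
            self.events.popleft()

        rate = len(self.events) / self.window_seconds
        if self.baseline_rate is None:
            self.baseline_rate = rate
            return False
        # Update the baseline slowly, so organic growth doesn't trigger the alarm.
        self.baseline_rate = 0.99 * self.baseline_rate + 0.01 * rate

        top_origin, top_count = Counter(o for _, o in self.events).most_common(1)[0]
        sudden_spike = rate > self.rate_multiplier * self.baseline_rate
        skewed_origin = top_count / len(self.events) > self.origin_share
        return sudden_spike and skewed_origin
```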
Link11’s algorithm also uses behavioural analytics to build a profile of each client’s typical user. When a new visitor arrives at the website, they are awarded “good points or bad points” depending on how closely their behaviour mirrors that of the average user.
Depending on the number of bad points collected, the algorithm determines the best course of action to take. Its response can range from issuing a captcha to verify that the user is legitimate, to cutting off the connection entirely.
New clients start off with a default algorithm based on data from all of Link11’s clients, but with time each client’s algorithm builds up its own profile of a typical user. This process never ends, with the algorithm continuing to learn and refine its responses over time.
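A rough sketch of that scoring idea is shown below. The rules, point values and thresholds are invented for illustration – Link11’s actual profiles are learned per client rather than hard-coded – but the shape of the decision is the same: accumulate “bad points”, then allow, challenge or block.

```python
# Illustrative sketch only: point values, thresholds and the 'typical' profile are invented.
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    CAPTCHA = "captcha"
    BLOCK = "block"

class BehaviourScorer:
    """Award 'bad points' to visitors whose behaviour deviates from a typical user's."""

    def __init__(self, captcha_threshold=3, block_threshold=6):
        self.captcha_threshold = captcha_threshold
        self.block_threshold = block_threshold
        # Toy profile of a typical visitor; a real system would learn this per client.
        self.typical = {"requests_per_min": 30, "has_cookies": True, "known_user_agent": True}

    def score(self, visitor: dict) -> int:
        points = 0
        if visitor.get("requests_per_min", 0) > 2 * self.typical["requests_per_min"]:
            points += 3   # far above the normal request rate
        if not visitor.get("has_cookies", False):
            points += 1   # headless clients and bots often skip cookies
        if not visitor.get("known_user_agent", False):
            points += 2   # unfamiliar or missing user agent
        return points

    def decide(self, visitor: dict) -> Action:
        points = self.score(visitor)
        if points >= self.block_threshold:
            return Action.BLOCK
        if points >= self.captcha_threshold:
            return Action.CAPTCHA
        return Action.ALLOW

# A fast, cookie-less client with an unknown user agent collects enough bad points to be blocked.
scorer = BehaviourScorer()
print(scorer.decide({"requests_per_min": 90, "has_cookies": False, "known_user_agent": False}))
# -> Action.BLOCK
```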
Addressing AI-based cybersecurity concerns
Concerns have been raised about the use of AI to address the ever-changing threat landscape. It is feared that threat actors could target cybersecurity companies, or the training data they use, in order to manipulate or overcome an algorithm.
Typically, security products sort code into ‘good’ and ‘bad’ categories. However, there are already examples of polymorphic malware, which changes its code in order to avoid detection by cybersecurity software designed to make that classification automatically.
“We thought about this extremely carefully, and this is why we’ve taken a very different approach,” Wilczek explains.
“If you look at the pattern here, most vendors, if not all, exclusively blacklist. They detect something new, they blacklist it. If you apply that approach, you end up with that very nasty cat and mouse game.”
To avoid this, Link11 uses whitelisting, essentially treating traffic as guilty until proven innocent.
“That means that even if there are new threat vectors coming in, nevermind, no problem, because it’s not whitelisted. Everything is being checked before,” Wilczek says.
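The contrast between the two approaches can be shown in a few lines of Python. The addresses and functions below are purely illustrative (the IPs come from reserved documentation ranges), but they capture the point: a blacklist lets anything new through until it has been identified, while a whitelist checks everything that has not already been proven legitimate.

```python
# Purely illustrative sketch: the IP addresses come from reserved documentation ranges.

BLACKLIST = {"203.0.113.7"}      # known-bad sources; anything new slips through until it is added
WHITELIST = {"198.51.100.10"}    # sources already verified as legitimate

def blacklist_filter(source_ip: str) -> bool:
    """Blacklisting: allow by default, block only what has already been identified as bad."""
    return source_ip not in BLACKLIST

def whitelist_filter(source_ip: str) -> bool:
    """Whitelisting: block by default – traffic is 'guilty until proven innocent'."""
    return source_ip in WHITELIST

# A brand-new attack source passes the blacklist but not the whitelist.
unknown_source = "192.0.2.55"
print(blacklist_filter(unknown_source))   # True  -> allowed straight through
print(whitelist_filter(unknown_source))   # False -> held back and checked before being trusted
```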
But is AI-based cybersecurity really necessary? Regardless of the answer, there might not be an alternative in a few years’ time, given that 99% of organisations are already struggling to manage all of their cybersecurity needs.
“Trying to throw people at the issue isn’t going to get anyone far,” he explains. “Wages are going up, there is massive demand, there is very little supply. The numbers are only going to get worse over the next couple of years.”
The AI threat
The cybersecurity industry isn’t the only side turning new technologies to its advantage. Cybercriminals are increasingly using AI to develop new threats, unearth new attack vectors and increase the severity of existing ones.
According to Link11’s H1 2019 report, a significant proportion of DDoS attacks are now carried out by abusing cloud services offered by the likes of Amazon Web Services, Microsoft Azure, Google Cloud and others. At least a third of all DDoS attacks were launched using cloud services in the first half of the year, and that figure climbed as high as 48% in April.
“And guess what… there was no one keying in anything,” Wilczek says. “They’ve prepared malicious VMs [virtual machines]. It’s all coded. They use machine learning mechanisms in order to create these fake cloud accounts in the first place.”
Much like Link11 uses AI to detect new threats, there is also potential for the technology to be used to find new vulnerabilities to exploit.
“Rather than spending any time as a human, why would you bother if you can let an algorithm deal with it and figure out the greatest weakness to infiltrate that network? There is nothing more convenient than that.”
“AI is going to be weaponised increasingly more in order to exploit weaknesses and vulnerabilities,” he says. “I definitely see that happening.”
Winning the AI cyberwar
Defending against cyber threats often feels like a losing battle. Regulations such as the General Data Protection Regulation (GDPR) have helped to educate businesses on good cybersecurity practice, and they are spending more and more on their defences. Yet attacks continue to increase in both frequency and severity.
However, Wilczek is “pretty confident” that good will win out eventually. AI will play its part in overcoming DDoS attacks among other cyber threats, but it will take bringing together all of the “right mechanisms and tools” for the right side to prevail.
“If it was up to me, the good will always win, but it’s not going to fall out of the sky,” Wilczek says. “AI certainly plays a role, there’s no question about it as far as I’m concerned.”
However, there is “no such thing as a silver bullet that solves all problems under the sun”.
“It’s orchestrating the whole of security across so many different layers, from a tech standpoint, human standpoint, physical standpoint, and so on. Putting all that together in an orchestrated manner, I think is what needs to be done.”