Malicious AI
Deepfakes Beyond Politics: How AI Could Be Used to Wreak Havoc in Business
Deepfakes are rapidly emerging as the most talked-about threat to democracy, but beyond malicious depictions of politicians they also pose a significant threat to the world of business. Lucy Ingham speaks to Galina Alperovich, senior machine learning researcher at Avast, to find out where the technology is now, how it could progress and whether there is any prospect of it being stopped
Galina Alperovich, senior machine learning researcher, Avast
As we see politics in much of the West heat up, concerns about how technology is used to manipulate public opinion continue to grow. One of the biggest areas of concern is deepfakes – and as artificial intelligence advances, this is set to become an ever-greater issue for democracy.
Comprising manipulated multimedia content created using artificial intelligence (AI), deepfakes enable malicious actors to create entirely fake – yet utterly believable – footage of high-profile figures.
In the political world, this poses a very genuine threat, creating the potential for videos that depict politicians saying things they would never actually say – things that could completely destroy their political careers.
“The tech is quite new, and the community still doesn’t know exactly how it can be effectively used and misused. However, with strong evidence of the power of social media to provoke discussion, set news agendas and even sway voters, it is natural to expect it to be misused against politicians,” explains Galina Alperovich, senior machine learning researcher at Avast.
“Deepfakes are not common… yet. But I believe they will become increasingly more prevalent over the next year or so.”
However, the technology also poses a very real threat to business. And despite this rarely being discussed, it may prove at least as damaging in the long run.
The current capabilities of deepfakes
At present, examples of deepfake technology ‘in the wild’ remain relatively rare.
A recent notable exception is a video of Facebook CEO Mark Zuckerberg in which he appears to boast about stealing users’ data, although this was created by artists Bill Posters and Daniel Howe in collaboration with advertising company Canny.
It was uploaded to Facebook-owned sites to test the company’s policy on deepfakes, following the emergence of a video of US politician Nancy Pelosi that had been edited to make her speech appear slurred. Although the Pelosi video is not a true deepfake as it did not require AI, it nevertheless demonstrated the potential damage a manipulated video of a politician can do.
However, the very nature of the technology means it could be used to impersonate anyone who has already been recorded on video – or even to depict a person, real or invented, for whom no footage exists.
“Typically, deepfake creators replace one face with another – or, more generally, one object with another – in video footage. They can also make small adjustments, such as changing lip movements to sync with faked audio tracks,” explains Alperovich.
“The most famous example of this is the forged Obama video published in 2017. The technology can also generate authentic-looking faces and objects that have never existed before.”
While video is the most widely discussed form of deepfake, audio is at a more mature stage of development.
“Voice-only fakes are easier to produce than video versions. Glitches in the algorithms, which in video can appear as artefacts interfering with the background scene, are readily dismissed by human ears as ‘just noise’ in a voice recording,” she says.
At present, there are limitations on what can be achieved with video – although current results are still enough to fool the typical social media user.
“Right now, it is quite easy to create a fake video of reasonable quality. There is software available and even dedicated websites that can add any face to an existing video,” says Alperovich.
“It can be done on a regular laptop in an afternoon, for example. If you’re in your second year of university studying AI and image manipulation, you could do it.
“Given how quickly machine learning is progressing, it is not inconceivable to think that in the future high-quality videos of people and content could be generated.”
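To give a sense of how low the barrier to the basic building blocks has become, the sketch below – our illustration, not code from Avast – uses the open-source OpenCV library to find a face in a frame and crudely paste another over it. A true deepfake trains a neural network per identity; this naive overlay merely shows how readily available the detect-and-replace primitives are.

```python
# Illustrative sketch only: a crude detect-and-replace overlay using OpenCV's
# bundled Haar cascade. Real deepfakes train neural autoencoders or GANs per
# identity; this simply shows how accessible the basic building blocks are.
import cv2

# Pretrained frontal-face detector that ships with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def naive_face_swap(target_frame, source_face):
    """Paste source_face over the first face detected in target_frame."""
    gray = cv2.cvtColor(target_frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces[:1]:
        target_frame[y:y + h, x:x + w] = cv2.resize(source_face, (w, h))
    return target_frame
```

Applied frame by frame to footage read with cv2.VideoCapture, something like this runs comfortably on a regular laptop – which is exactly the accessibility Alperovich describes.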
Potential beyond politics: Deepfakes in business
While the potential political damage of deepfakes is easy to see, the technology also poses serious risks to businesses and to the wider economy.
For high-profile businesses, for example, a video of a member of the C-suite saying something controversial or critical of their company could be devastating.
“A well-made, compromising video can be dangerous and cause serious reputational damage,” says Alperovich.
“It could also cause material damage: stock prices are very sensitive to things like this and markets can react very quickly. The technology is new, and there could be a number of subtle ways it could be misused.”
There is also the potential for deepfakes to be used by cybercriminals to target businesses.
“Since non-existent faces can be generated, fake profiles complete with supporting imagery and video can be built,” she says.
“I recently saw an example where someone was using a photo of a non-existent person on a LinkedIn profile for malicious purposes. Using a fake face is useful to criminals as it eliminates the risk of a reverse image search identifying the profile as using someone else’s details.”
The technology could even be harnessed to abuse the stock market – with potentially global ramifications.
“Globally, deepfakes could do harm. An attacker could gain a lot of money from stock market price movements,” she adds.
“It’s already illegal to manipulate stocks, so one could say that all trading after a particular event should be cancelled. The question is how fast you are able to identify and react to such an attack, and how easy the wider impact is to undo.”
Why deepfakes are still rare
With so much potential for damage, why is it that deepfakes are still rare? The answer, according to Alperovich, is cost.
“One of the reasons we are not currently seeing more deepfakes is simple economics,” she says.
“Most cybercrime is a business. If you are a dedicated attacker who has funds to invest, there are cheaper and simpler means to get the same effect. Even for criminals, it makes business sense to take the cheaper route.”
However, deepfake technology has by no means reached maturity, and with advances comes a lowering of costs.
“It’s certain to progress in the next few years. There’s a lot of research going on in this area among the AI community,” she explains.
“Many things are already possible, but the current question mark is cost. The history of computing suggests that these things will become less and less expensive in the near future.”
This means that while, for most of us, the issue of deepfakes is unlikely to pose a personal or enterprise-level threat, this could change in the future.
“This technology is still relatively exclusive and new, and it’s not cost-effective to create high-quality fake videos with arbitrary content. It still requires time, money and skills.
“Therefore, it’s unlikely that most of us will be considered valuable targets, so right now people shouldn’t worry about it too much.
“That being said, the machine learning community is rapidly developing its algorithms, so this could change. The general advice from us is to be more responsible for, and cautious with, your devices, data and privacy. Anyone could be chosen as a target for this in the future.”
Can deepfakes be stopped?
With such significant potential future damage, the question of how to stop deepfakes is an urgent one. Unfortunately, the answer remains some way off, both in terms of recognising a deepfake video and doing something about it.
“At this stage of deepfake development there is no general solution for detection. The problem is that if somebody releases a system for checking the authenticity of a video, attackers could use that same system to tune their own algorithms. AI is a general-purpose technology. It can be used by both the good and the bad guys,” she explains.
“In general, there’s no technical solution that can prevent deepfakes. A future possibility could be certified content, which is similar to what we can see in our computer software, where digital signatures are used to prove software is genuine. Content could be signed to show it came from a legitimate company or source.
“Big players could digitally sign their content, but that’s not going to happen soon. It’s a ‘chicken and egg’ problem – social networks, browsers, devices and online news publishers have to support signed content as well before it can be effective.”
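As a rough sketch of that idea – ours, not an Avast proposal – the snippet below uses the open-source Python cryptography library to sign a clip’s bytes with an Ed25519 key and verify them, much as software publishers sign binaries today.

```python
# Illustrative sketch of "certified content": a publisher signs a video's
# bytes and a platform verifies the signature before serving the clip.
# The signing scheme is the article's hypothetical, not an existing standard.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Publisher side: generate a keypair once, then sign each piece of content.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

video_bytes = b"raw video file contents"  # placeholder payload
signature = private_key.sign(video_bytes)

# Platform side: verify against the publisher's published public key.
try:
    public_key.verify(signature, video_bytes)
    print("Signature valid: content unmodified since signing.")
except InvalidSignature:
    print("Signature invalid: content altered or not from this publisher.")
```

As Alperovich notes, the cryptography is the easy part; the chicken-and-egg problem is getting every link in the distribution chain to check and honour such signatures.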
For major businesses, the advice is to be cautious.
“The big players should definitely be aware – and conscious of the impact. They should try to track online conversations so they can act to ensure deepfakes are responded to before they affect the organisation,” explains Alperovich.
However, she also believes that those involved in the dissemination of such footage need to accept responsibility for stopping it.
“In my opinion, the biggest responsibility is on the large platforms, big internet players and information distributors, mainly because they have centralised access to the data. This data can be used for the creation of machine learning-based tools that could combat the spread of deepfakes and fake news,” she says.
“Without easy access to data like this, it’s very difficult and sometimes impossible to develop the necessary tools for detection. Big internet companies have the required human resources; they have the best AI specialists, and they have computational resources for developing and running the tools.”