
Deepfakes: Can we trust anything online anymore?

The Tom Cruise deepfake TikTok video set off alarm bells across the globe. As deepfake technology continues to develop, we sat down with Cybereason’s Yossi Naar to explore the impact deepfakes can have on society and what needs to be done to identify them.

Although AI is fuelling the development of a more dynamic world, the debate over its potential misuse continues. At the root of this debate lies the fact that the very features that make artificial intelligence and machine learning integral to businesses are the same ones that cybercriminals misuse and abuse for their own benefit.

Deepfakes are one example of how cybercriminals exploit AI techniques to manipulate audio and visual content so that it appears authentic to the untrained eye. Simply put, they are AI-generated videos that can make people appear to do and say anything.

Ever since the first low-quality deepfakes emerged a few years ago, they have been viewed as a threat with considerable implications across culture, geopolitics and security.

Remember the deepfake TikTok Tom Cruise video that went viral in February this year?

The videos are a testament to the ability of this technology to produce lifelike fakes. Even commercially available deepfake-detection technology cleared the clips as authentic.

In this instance, the videos were not meant to cause any harm to the Hollywood superstar. However, the consequences could be dire if a deepfake made with the intent to harm were to go viral.

Deepfakes, and our ability to ensure they cannot be used maliciously, are a subject worthy of further discussion before the technology advances to the point where we can’t put the genie back in the bottle.

[Embedded TikTok video from @deeptomcruise – “Sports!” (original sound – Tom)]

As this malicious technique continues to improve, Yossi Naar, chief visionary officer and co-founder of Cybereason, explains why it is time to renew our efforts to identify deepfakes before we are faced with considerable implications across culture, geopolitics and security.

Yossi Naar, chief visionary officer and co-founder of Cybereason.

What are the security implications of deepfakes? (For both businesses and governments)

Right now, generative text technology is reserved to the specific people OpenAI has granted access to. They withheld the last generation from public release out of fear of abuse, so for the moment that channel is not a big threat. As for video deepfakes, real-time videos are still far away, and the regular ones have a visible “artificial” quality to them.

How can security teams correctly identify deepfakes? And how can they empower employees against deepfake attacks?

The best weapon we humans have is the “uncanny valley” (a concept first introduced in the 1970s): learning to trust the uneasy feeling that what we’re looking at isn’t really human but somehow alien. This is the key to unravelling fakes – they feel wrong. The same is true for the older (pre-GPT-3) generation of generative text systems: the text usually does not make sense and will appear somewhat nonsensical.

What would regulation surrounding deepfakes look like?

Deepfakes are a technological tool with massive potential implications for culture, politics and security. But historically, advances of this type have not been constrained well by regulation. The cat may already be out of the bag where deepfakes are concerned. It may be that legitimate uses can be controlled and that access can be limited to reduce availability to some degree, but I am not sure it will affect malicious use.

What technologies are available, currently and potentially, to identify deepfakes?

Today, the tools are still being built; to my knowledge, no good ones exist yet. Luckily, current-generation video deepfakes are not good enough to be a big concern. What concerns me more are generative text fakes, because it is not clear that a strategy exists to detect them: they are built on a deep and massive knowledge base, and they are getting very close to human authorship.

The detection of fake generated data is based on the ability to identify repeated patterns left behind by the algorithms used. In the case of GPT-3 (an autoregressive language model that uses deep learning to produce human-like text), the learning machine leans on a massive collection of human-generated text – the same kind of tool one would use to try to classify a “true” text versus a “fake” one. Today, it is not clear that this would be possible without the operators (OpenAI in this case) leaving an intentional signature. Of course, it is possible that once its use increases, it will prove less powerful than it currently appears to be.
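One concrete illustration of this pattern-spotting idea is statistical scoring: measuring how predictable a passage is under a language model, on the theory that generated text tends to track a model’s own patterns more closely than human writing does. The sketch below is a minimal, assumption-laden example, not a description of Cybereason’s or OpenAI’s tooling: it assumes the Hugging Face transformers library with PyTorch, uses GPT-2 as the scoring model, and the threshold is a hypothetical placeholder rather than a calibrated cut-off.

# Minimal sketch: flag text whose perplexity under a pretrained language model
# is suspiciously low. Low perplexity means "very predictable to the model",
# which is one weak statistical signal of machine-generated text.
# Assumptions: transformers + torch installed; "gpt2" and the threshold are
# illustrative placeholders, not a production detector.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

MODEL_NAME = "gpt2"  # placeholder scoring model; any causal LM works for the idea
tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME)
model = GPT2LMHeadModel.from_pretrained(MODEL_NAME)
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the scoring model (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels makes the model return the average
        # next-token cross-entropy loss; exponentiating gives perplexity.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return float(torch.exp(loss))

def looks_generated(text: str, threshold: float = 20.0) -> bool:
    """Crude heuristic: unusually low perplexity is one weak sign of generated text."""
    return perplexity(text) < threshold  # hypothetical threshold, not calibrated

if __name__ == "__main__":
    sample = "The quick brown fox jumps over the lazy dog."
    print(f"perplexity={perplexity(sample):.1f} flagged={looks_generated(sample)}")

In practice a single perplexity threshold is easy to defeat through sampling settings or light paraphrasing, which is consistent with Naar’s point that no reliable strategy is yet known for detecting high-quality generated text.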