
Replacing human productivity, or a threat to security? The bigger concern with deepfakes and AI

While GenAI has increasingly made life easier for organisations, 2024 also brings strong concerns about what threatens them. As GenAI continues to evolve, so do newer threat models and the use of deepfakes.


Over $25 million is what a deepfake scam cost a multinational firm based in Hong Kong, after one of its employees was tricked by a deepfake video of the company's Chief Financial Officer. The employee, who worked in the company's finance department, believed the video call to be real, according to media reports.

But here is the catch. The email itself carried clues of potential fraud: it told the employee that the session would be about a secret transaction. The idea is to create urgency and get people to act fast and transfer funds. While the employee had his doubts, the people on the video looked and sounded just like his colleagues.

That incident opens up several of the risks and challenges the fast-evolving world of Generative AI brings with it. In fact, at the World Economic Forum this year, Microsoft Chief Economist Michael Schwarz said the bigger worry should be AI being used by bad actors, rather than AI outpacing human productivity.

Deepfakes, a combination of the AI concept of deep learning with something fake, can easily be used to create convincing hoax images, videos and audio. There have already been several incidents, from automated phone messages that sound like a leader to faked video conferences, and with time the list will only grow.

One report states there was a 10x increase in deepfake fraud cases globally from 2022 to 2023; among regions, the Middle East and Africa (MEA) stood out with a whopping 450 per cent growth rate.

A world of digital clones  

Crime-as-a-Service tools, especially for video and speech synthesis, are becoming ever more affordable and accessible. With costs dropping below $100, sophisticated attacks are becoming widespread.

This marks a significant shift in the threat landscape, with low-skilled fraudsters and criminals gaining easy access to the means to launch deepfake attacks.

True multimodal AI cloning models are emerging that create digital twins, or doppelgangers: identical clones of an individual's voice and likeness in real time.

Reports expect this technology to lead to real-time deepfake video and voice cloning that produces near-perfect digital replicas of individuals.


Biometrics may not be as safe as you think  

According to Gartner VP Analyst Akif Khan, the past decade has already seen several inflection points in AI that have allowed for the creation of synthetic images. Gartner predicts that by 2026, close to 30 per cent of enterprises will no longer consider face biometrics and other identity verification and authentication solutions reliable, due to deepfakes.

Khan said that images of real people's faces can be used by malicious actors to undermine biometric authentication, or even render it ineffective.

“As a result, organisations may begin to question the reliability of identity verification and authentication solutions, as they will not be able to tell whether the face of the person being verified is a live person or a deepfake,” stated Khan. 

The Onfido Identity Fraud Report 2024 states that while biometric verification remains effective in combating fraud, fraudsters are now adapting to biometrics in inventive ways. Organisations will need to integrate the latest verification technologies, bringing in AI and liveness detection.

The C-suite risk  

Plenty of threat actors are tricking people into scanning QR codes that lead to malware downloads. A report by researchers at Abnormal Security states the trend is only going to grow.

“And C-suite executives are 42 times more likely than an ordinary employee to receive QR code attacks. These QR codes normally take people to what look like legitimate websites and get them to enter sensitive personal data,” the report stated.  

Another report, by Trustwave researchers, states that Facebook job ads are being used to spread malware and steal data. Malwarebytes's annual State of Malware report found the biggest targets for ransomware gangs are IT services, the broader services sector and manufacturing.

Even Canon has issued updates to close several vulnerabilities in some of its multi-function and laser printer models. Devices connected directly to the internet without a router can be easily hacked.

As Generative AI and security measures both grow stronger, deepfake detection for KYC processes is expected to be driven by a combination of behavioural anti-fraud measures, biometric verification methods, liveness checks and machine learning technologies.
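To make that combination concrete, here is a minimal sketch of how such signals might be fused into a single KYC decision. All signal names, weights and thresholds below are illustrative assumptions for this article, not any specific vendor's API:

```python
# Minimal sketch: fusing multiple anti-deepfake signals in a KYC check.
# Signal names, weights and thresholds are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class VerificationSignals:
    liveness_score: float      # 0.0-1.0, from a liveness-detection check
    face_match_score: float    # 0.0-1.0, biometric match against ID document
    behaviour_score: float     # 0.0-1.0, behavioural anti-fraud signal
    ml_authenticity_score: float  # 0.0-1.0, ML classifier's "not a deepfake" confidence

def kyc_decision(s: VerificationSignals,
                 weights=(0.3, 0.3, 0.2, 0.2),
                 threshold=0.75) -> str:
    """Combine independent signals into one weighted score, so that no
    single check (e.g. a face match alone) is trusted by itself."""
    # A strong face match cannot rescue a failed liveness check.
    if s.liveness_score < 0.5:
        return "reject"
    scores = (s.liveness_score, s.face_match_score,
              s.behaviour_score, s.ml_authenticity_score)
    combined = sum(w * v for w, v in zip(weights, scores))
    return "accept" if combined >= threshold else "manual_review"
```

The design point the article makes is visible in the hard liveness gate: a deepfake can score highly on face similarity while failing liveness, so a high `face_match_score` alone never yields an accept.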

There will be a need to go beyond analysing isolated imagery, audio or video, and to look at context, cross-referenced information and environmental factors. As AI evolves, so will deepfakes, and so must the security measures.