
Report predicts attacks using AI-generated deepfakes on face biometrics by 2026

Due to deepfakes, close to 30 per cent of enterprises will no longer consider identity verification and authentication solutions reliable in isolation by 2026

By 2026, close to 30 per cent of enterprises will no longer consider face biometrics and similar identity verification and authentication solutions reliable in isolation, due to deepfakes. According to Gartner VP Analyst Akif Khan, the past decade has seen several inflection points in the field of AI that allow for the creation of synthetic images.

These artificially generated images of real people’s faces, known as deepfakes, can be used by malicious actors to undermine biometric authentication or render it ineffective, according to Khan.

“As a result, organizations may begin to question the reliability of identity verification and authentication solutions, as they will not be able to tell whether the face of the person being verified is a live person or a deepfake,” stated Khan.

Deepfakes and PAD mechanisms

Most identity verification and authentication processes that use face biometrics rely on presentation attack detection (PAD) to assess the user’s liveness.

Gartner’s research found that while presentation attacks are the most common attack vector even today, injection attacks rose by 200 per cent in 2023. Preventing such attacks will therefore require a combination of PAD, injection attack detection (IAD) and image inspection.

“Current standards and testing processes to define and assess PAD mechanisms do not cover digital injection attacks using the AI-generated deepfakes that can be created today,” said Khan.

Combine IAD and Image Inspection Tools

Khan stated that organisations should now start defining a minimum baseline of controls by working with vendors that have specifically invested in mitigating the latest deepfake-based threats using IAD coupled with image inspection.
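As a rough illustration of how such a minimum baseline of controls might be layered, the Python sketch below combines PAD, IAD and image-inspection scores into a single accept/reject decision. Everything here is hypothetical: the class, function names, thresholds and scores are invented for the example and do not reflect any vendor’s actual API.

```python
from dataclasses import dataclass

# Illustrative sketch only: real PAD/IAD products expose their own
# vendor-specific APIs and score calibrations.

@dataclass
class VerificationSignals:
    pad_score: float    # presentation attack detection: 0 = spoof, 1 = live
    iad_score: float    # injection attack detection: 0 = injected feed, 1 = genuine capture
    image_score: float  # image inspection: 0 = synthetic/manipulated, 1 = authentic

def passes_baseline(signals: VerificationSignals,
                    pad_min: float = 0.9,
                    iad_min: float = 0.9,
                    image_min: float = 0.8) -> bool:
    """Accept a face-verification attempt only if every layer clears its
    threshold. A strong PAD score alone is not enough, because a deepfake
    can be injected behind the camera without ever being 'presented'."""
    return (signals.pad_score >= pad_min
            and signals.iad_score >= iad_min
            and signals.image_score >= image_min)

# A convincing deepfake may pass PAD yet fail injection detection.
attempt = VerificationSignals(pad_score=0.97, iad_score=0.35, image_score=0.60)
print(passes_baseline(attempt))  # False: rejected by the IAD layer
```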

Gartner’s research stated that chief information security officers (CISOs) and risk management leaders must choose vendors that can demonstrate they have capabilities and a plan that go beyond current standards, and that they are monitoring, classifying and quantifying these new types of attacks.

CISOs and risk management leaders should also include additional risk and recognition signals, such as device identification and behavioral analytics, to increase the chances of detecting attacks on their identity verification processes.
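A hypothetical sketch of how such auxiliary signals could be blended follows; the signal names and weights are invented for illustration and are not drawn from Gartner’s research.

```python
def risk_score(known_device: bool,
               behaviour_anomaly: float,  # 0 = typical behaviour, 1 = highly anomalous
               ip_reputation: float) -> float:  # 0 = clean, 1 = known-bad
    """Blend contextual signals into a single risk score in [0, 1]."""
    score = 0.0
    if not known_device:
        score += 0.4                   # an unrecognised device raises risk
    score += 0.4 * behaviour_anomaly   # e.g. typing cadence, navigation patterns
    score += 0.2 * ip_reputation
    return min(score, 1.0)

# A biometric match on an unknown device with anomalous behaviour should
# trigger step-up verification rather than silent acceptance.
if risk_score(known_device=False, behaviour_anomaly=0.8, ip_reputation=0.1) > 0.5:
    print("Escalate: request an additional verification factor")
```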

Organisations need to take steps now to mitigate the risks of AI-driven deepfake attacks, by selecting technology that can prove genuine human presence and by implementing additional measures to prevent account takeover.