
Identity Verification Solutions To Be Less Reliable Due To AI-Generated Deepfakes: Gartner

Organisations may begin to question the reliability of identity verification and authentication solutions, Gartner's Akif Khan said.

(Source: Freepik)

By 2026, attacks using artificial intelligence-generated deepfakes on face biometrics will lead 30% of organisations to doubt the reliability of identity verification and authentication solutions, according to research and consulting firm Gartner.

Identity verification and authentication processes using face biometrics currently rely on presentation attack detection (PAD) to assess the user's liveness. According to Gartner experts, the standards and testing processes used to define and assess PAD mechanisms do not cover digital injection attacks that use AI-generated deepfakes.
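To see why injection attacks slip past PAD, consider the simplified Python sketch below. The function and field names are hypothetical stand-ins rather than any real biometric API, and the checks are stubbed out purely for illustration.

```python
# Illustrative sketch only; the checks are stand-in stubs, not a real
# biometric API. It shows why PAD alone misses digital injection attacks.

def passes_pad(frame: dict) -> bool:
    # PAD inspects the content of the captured frame for liveness cues
    # (texture, depth, motion) to catch a photo or mask shown to the camera.
    return frame.get("liveness_cues_present", False)

def matches_enrolled_face(frame: dict) -> bool:
    # Stand-in for matching the frame against the enrolled face template.
    return frame.get("face_matches", False)

def verify_user(frame: dict) -> str:
    if not passes_pad(frame):
        return "rejected: presentation attack suspected"
    # Gap: nothing here verifies that the frame came from a genuine
    # camera. An attacker who injects an AI-generated deepfake frame
    # directly into the video stream can exhibit convincing liveness
    # cues and pass both checks.
    if matches_enrolled_face(frame):
        return "accepted"
    return "rejected: face mismatch"

# A lifelike injected deepfake passes, because only content is checked:
print(verify_user({"liveness_cues_present": True, "face_matches": True}))
```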

“In the past decade, several inflection points in the field of AI have occurred that allow for the creation of synthetic images. These artificially generated images of real people’s faces, known as deepfakes, can be used by malicious actors to undermine biometric authentication or render it inefficient,” said Akif Khan, vice president, analyst at Gartner.

“Organisations may begin to question the reliability of identity verification and authentication solutions, as they will not be able to tell whether the face of the person being verified is a live person or a deepfake,” Khan said.

Presentation attacks remain the most common attack vector, but injection attacks rose 200% in 2023, according to Gartner research. Preventing such attacks will require combining PAD with injection attack detection (IAD) and image inspection.
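The layered defence Gartner describes can be pictured as independent checks that must all pass, as in the sketch below. The signal names are assumptions made for this example, not a vendor's actual interface.

```python
# Hedged sketch of a layered check combining PAD, IAD and image
# inspection; all field names here are illustrative assumptions.

def layered_verification(frame: dict) -> str:
    checks = {
        # PAD: liveness cues in the presented image.
        "pad": frame.get("liveness_cues_present", False),
        # IAD: did the frame arrive through a trusted capture channel
        # (e.g. no virtual-camera driver or stream tampering)?
        "iad": frame.get("capture_channel_trusted", False),
        # Image inspection: forensic screening for artefacts typical
        # of synthetic imagery (blending seams, inconsistent noise).
        "image_inspection": frame.get("passes_image_inspection", False),
    }
    failed = [name for name, ok in checks.items() if not ok]
    if failed:
        return "rejected: failed " + ", ".join(failed)
    return "accepted"

# The injected deepfake from the earlier example now fails the IAD layer:
print(layered_verification({"liveness_cues_present": True,
                            "passes_image_inspection": True}))
```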

To stay protected against AI-generated deepfakes beyond face biometrics, chief information security officers (CISOs) and risk management leaders must choose vendors that can demonstrate capabilities beyond the current standards and that are monitoring, classifying and quantifying these new types of attacks, Gartner said.

“Organisations should start defining a minimum baseline of controls by working with vendors that have specifically invested in mitigating the latest deepfake-based threats using IAD coupled with image inspection,” said Khan.

Once the strategy is defined and the baseline is set, CISOs and risk management leaders must include additional risk and recognition signals, such as device identification and behavioural analytics, to increase the chances of detecting attacks on their identity verification processes.
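As a rough illustration of how such signals might feed that decision, the minimal sketch below combines device identification and behavioural analytics into a weighted risk score. The signals, weights and threshold are invented for this example, not drawn from Gartner's research or any vendor's model.

```python
# Minimal sketch of a weighted risk score; every signal, weight and
# threshold here is a hypothetical assumption for illustration.

def session_risk_score(signals: dict) -> float:
    score = 0.0
    if not signals.get("known_device", False):     # device identification
        score += 0.4
    if signals.get("behaviour_anomalous", False):  # behavioural analytics,
        score += 0.4                               # e.g. typing cadence
    if signals.get("ip_reputation_poor", False):
        score += 0.2
    return score

def decide(signals: dict, threshold: float = 0.5) -> str:
    # High-risk sessions are stepped up rather than trusting the
    # face check alone.
    if session_risk_score(signals) >= threshold:
        return "step-up verification"
    return "proceed"

print(decide({"known_device": False, "behaviour_anomalous": True}))
```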

According to Gartner, security and risk management leaders should also mitigate the risks of AI-driven deepfake attacks by selecting technology that can prove genuine human presence and by implementing additional measures to prevent account takeover.