
Deepfakes and Biometrics: Bypassing Security with Synthetic Identities

May 12

Author: Anel Zukanovic


If you haven’t noticed, deepfakes have gone from funny and intriguing to a genuine security threat for individuals and organizations over the past few years. Examples on social media show videos of celebrities or politicians saying things they never actually said. Entertaining as they can be, deepfakes are making a not-so-positive name for themselves in the world of biometric security. In this blog, I will cover what deepfakes really are, how they challenge biometric systems, and the real-world problems they bring along.


What is a Deepfake?

So, what exactly is a deepfake? Deepfakes are realistic digital forgeries generated by artificial intelligence. They usually involve manipulating audio or video to show someone doing or saying things they never did or said. At the core of deepfake technology is machine learning, more specifically Generative Adversarial Networks (GANs). A GAN pits two neural networks against each other: a generator that creates fake images or video, and a discriminator that tries to spot the fakes. As training progresses, the generator gets better and better at fooling the discriminator, and the resulting content becomes increasingly realistic.
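To make that adversarial setup concrete, here is a minimal sketch of a GAN training loop, assuming PyTorch. Everything in it is a toy: the layer sizes, learning rates, and the random batch standing in for real images are illustrative, not a model that produces convincing fakes.

```python
import torch
import torch.nn as nn

# Generator: noise in, fake "image" out. Discriminator: image in, realness score out.
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(32, 784)  # stand-in for a batch of real images
for step in range(3):       # a few illustrative training steps
    # Train the discriminator: real samples labeled 1, generated samples 0.
    fake = G(torch.randn(32, 64)).detach()
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator: try to make the discriminator call its fakes real.
    fake = G(torch.randn(32, 64))
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    print(f"step {step}: d_loss={d_loss.item():.3f}, g_loss={g_loss.item():.3f}")
```

This tug-of-war is exactly why deepfakes are hard to catch: every weakness the discriminator learns to exploit becomes a training signal that teaches the generator to remove it.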


In the beginning, deepfakes were bare-bones and easy to distinguish from real footage. Today they are far more advanced. Facial movements, skin texture, and even voice tone and pitch are copied so convincingly that both humans and specialized software struggle to tell real from fake.


Biometric Approach to Deepfakes

Biometrics have become a part of our daily digital lives, even if we are sometimes unaware of it. Unlocking your phone with Face ID, signing into apps with a fingerprint, or using voice verification all rely on biometric technologies, and all of them can be exploited. The interesting thing is that biometrics are built around the uniqueness of the user: fingerprints, facial structure, voice, and eyes are traits specific to an individual, which is what makes them strong authentication methods. In theory, this security model would be highly effective if deepfake technology had not started to rise. The strength of a biometric system depends heavily on its accuracy: as long as it can reliably tell one person from another, it stays secure. But if AI can convince both humans and software that fake biometric data is real, the security system fails drastically.
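Under the hood, most biometric verification comes down to comparing an embedding of the presented sample against an enrolled template and accepting a match above a threshold. The sketch below, assuming NumPy, uses random 128-dimensional vectors and a made-up 0.7 threshold; real systems extract these embeddings with trained face or voice models.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(presented: np.ndarray, enrolled: np.ndarray, threshold: float = 0.7) -> bool:
    # A convincing deepfake attacks exactly this step: if the synthetic
    # sample's embedding lands close enough to the template, the check passes.
    return cosine_similarity(presented, enrolled) >= threshold

rng = np.random.default_rng(42)
enrolled = rng.normal(size=128)                        # stored template
genuine = enrolled + rng.normal(scale=0.1, size=128)   # same person, new capture
impostor = rng.normal(size=128)                        # unrelated person

print(verify(genuine, enrolled))   # True: small capture-to-capture variation
print(verify(impostor, enrolled))  # False: unrelated embedding
```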


Bypassing Biometric Security

One of the biggest worries about deepfakes is their potential to circumvent biometric authentication. Think of a scenario where facial recognition is used to secure sensitive information: a malicious user could create a deepfake image or video realistic enough that the recognition software accepts it as genuine. Voice recognition systems face the same challenge, since attackers can generate deepfake audio that mimics someone’s voice closely enough to trick the authentication in place. Attacks like these are already happening. In 2019, criminals used AI-generated deepfake audio to impersonate a chief executive and tricked an executive at a UK energy company into transferring about 220,000 euros (roughly $243,000) to a fraudulent account.

More recently, deepfakes have been used to target education. Earlier this year, several universities in the UK reported cases of applicants using deepfakes during online admission interviews, altering their appearance and voice to seem more qualified for the place they were applying for. It shows how quickly a technological breakthrough can turn harmful in the wrong hands.


Detection

At this point, you are probably asking what is being done to prevent all of this. Researchers and technology experts are actively developing new defenses. One approach is improving detection algorithms: models trained to search for the artifacts that generation tools leave behind, inspecting elements like blinking, unnatural facial movements, inconsistencies in lighting, and sometimes even the subtle skin-color changes caused by a heartbeat. A second approach is liveness checking/detection, technology that requires the user to prove a living person is present rather than a static image or recording; systems like these greatly increase the effort needed to attack with deepfakes. Another defense being explored is multi-factor authentication: passwords, security tokens, and keystroke patterns add a second wall, so even a deepfake that fakes your identity still has to get through another check (both ideas are sketched below). Finally, behavioral biometrics such as movement and navigation patterns can continuously verify a user’s identity, and these are very difficult to replicate because they shift over time.
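As a concrete taste of the detection idea, the toy sketch below computes one temporal signal such detectors can build on: frame-to-frame differences across a video clip. The cited Yang et al. paper learns features from temporal differences; this single summary statistic, the random stand-in clip, and any threshold you would compare the score against are purely illustrative.

```python
import numpy as np

def temporal_difference_score(frames: np.ndarray) -> float:
    """frames: (T, H, W) grayscale video clip with values in [0, 1]."""
    diffs = np.abs(np.diff(frames, axis=0))  # (T-1, H, W) motion maps
    per_frame = diffs.mean(axis=(1, 2))      # mean change at each step
    # Synthetic faces often show unnaturally smooth or jittery motion;
    # the variance of frame-to-frame change is one crude indicator.
    return float(per_frame.var())

rng = np.random.default_rng(0)
clip = rng.random((16, 64, 64))              # stand-in for a real video clip
score = temporal_difference_score(clip)
print(f"temporal-difference score: {score:.6f}")  # compared to a tuned threshold
```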

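And as a sketch of the multi-factor idea, here is a time-based one-time code (TOTP, RFC 6238) layered on top of a biometric match score, using only the Python standard library. The 0.8 score threshold and the example secret are made up; the point is that a deepfake that beats the face check still fails without the second factor.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, at=None, digits=6):
    """Standard TOTP: HMAC-SHA1 over a 30-second counter, dynamically truncated."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // 30)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def authenticate(face_score, user_code, secret):
    # Even a perfect deepfake that passes the face check (score above the
    # illustrative 0.8 threshold) still has to supply a valid current code.
    return face_score >= 0.8 and hmac.compare_digest(user_code, totp(secret))

secret = "JBSWY3DPEHPK3PXP"  # example base32 secret, not a real credential
print(authenticate(0.95, totp(secret), secret))   # True only with a valid code
print(authenticate(0.95, "000000", secret))       # False: second factor fails
```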

However, technology can’t solve everything. Human observation and pattern recognition remain vital for detecting deepfake threats; a machine does not have the same natural intuition for context that a person does. Biometric defense can also only go so far before it crosses the line into personal privacy, and deepfake defense raises ethical and legal problems of its own. As these technologies blur the lines of consent and privacy, lawmakers around the world are trying to put proper legislation in place, criminalizing fraudulent impersonation and the malicious spread of synthetic media.


Conclusions

Deepfakes are a continuously evolving threat that demands non-stop innovation, research, and collaboration between industries to stop effectively. As defense methods improve, so will the attackers. Biometric systems must keep evolving in turn, incorporating strong anti-spoofing technology, multi-factor authentication, and advanced detection methods. As biometric systems see wider and wider consumer use, the stakes will only get higher, making this a bigger problem by the day.



References:

  1. Damiani, Jesse. “A Voice Deepfake Was Used to Scam a CEO Out of $243,000.” Forbes, 3 Sept. 2019, https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/.

  2. Heaven, Douglas. “What Are Deepfakes?” MIT Technology Review, 21 Nov. 2018, https://www.technologyreview.com/2018/11/21/139264/explainer-what-are-deepfakes/.

  3. Sample, Ian. “UK Universities Using Automated Interviews Face Rise of Deepfake Applicants.” The Guardian, 12 Feb. 2025, https://www.theguardian.com/education/2025/feb/12/uk-universities-automating-interviews-face-deepfake-applicants.

  4. Yang, Jiachen, et al. “Deepfake Detection via Temporal Difference Guided Contrastive Learning.” arXiv, 3 Oct. 2023, https://arxiv.org/abs/2310.02140.
