6 min read

Deepfake AI Candidates Are Here - Let's Talk About It 🎭

The Deepfake AI Candidate era has begun.

For those of you who have recruited in tech, especially on a global scale, this is not necessarily new.

The infiltration of fake profiles and scammer activity within the hiring process has been on the rise for many years now.

In fact, Gartner predicts that by 2028 one in four global candidates will be fake.

But we are now starting to reach new levels of fraud, thanks in large part to major advancements in AI technology.

And these are not just minor inconveniences for those in Talent Acquisition; increased fraud means increased risk to the business.

As true advisors to our organizations, it's our job to address these challenges with legitimate, long-term solutions.

So today, we're going to do just that.


"With some of the world's most sophisticated AI technologies now accessible to cybercriminals, we're entering a new era of digital fraudβ€”a reality we must be better equipped to handle." 

🗣️ Deepfake Candidates, Explained

A deepfake candidate is an individual who uses AI-generated tools - fake photos, videos, voices, and even credentials - to create the illusion of a real, qualified professional.

In some cases, these candidates are:

  • Using manufactured resumes with real-sounding education and work history.
  • Participating in interviews via deepfake videos or manipulated voice software.
  • Passing background checks by leveraging stolen identities or forged documents.

And it's not always about an individual trying to land a job.

In several cases, state-sponsored moles - like the North Korean groups cited in the Unit 42 report - have faked tech worker profiles to gain insider access to Western companies' intellectual property, infrastructure, and systems.

Imagine: you think you're hiring a remote software engineer. Instead, you may be unknowingly giving a bad actor the keys to your kingdom.

The Emergence Of This Unique Phenomenon

Three main forces are converging in the market today, causing these nefarious behaviors and practices to take root:

  1. Advances in Generative AI
    Advanced AI tools (Sora, D-ID, ElevenLabs) now allow anyone with moderate technical skills to create convincing deepfake videos, voices, and identities at scale.
  2. Explosion of Remote Work
    Remote and hybrid models mean candidates are often hired virtually. Video interviews and asynchronous assessments are par for the course - and these can be faked or outsourced.
  3. Global Economic Pressures and Cyber Warfare
    State-sponsored groups see Western tech companies as high-value targets. Instead of hacking in from the outside, why not just get hired and walk through the front door? These are not conspiracy theories; they are real threats.

Real Threats in the Wild

In the cases documented so far, threat actors have used a similar set of tactics to deceive hiring teams:

  • Using AI to generate fake profiles across hiring platforms.
  • Deepfaking video calls to pass interviews.
  • Creating entire LinkedIn and GitHub portfolios from scratch.
  • Masking their true locations and identities using VPNs and other obfuscation tactics.

These aren't sloppy attempts - they're sophisticated operations designed to slip past overworked, high-volume recruiting teams.

Implications for Talent Acquisition Professionals

This should be a wake-up call for all recruiters, founders, and hiring managers.

We need to recognize that in most cases, our current hiring processes are not built to detect fake candidates.

This creates massive risks to our business:

  • Data breaches from insider threats.
  • Loss of IP to foreign entities.
  • Regulatory violations (especially in sectors like defense, healthcare, and finance).
  • Brand reputation damage if a deepfake hire goes public with sensitive information.

This is important, and we need to adapt...fast.


🚩 How We Can Tackle the Threat of Deepfakes

Neutralizing this threat must become priority number one for teams hiring globally. Here's a practical playbook that most modern hiring teams can follow:

1. Revamp Your Identity Verification Processes

  • Implement multi-factor ID verification at the offer stage, using government-issued IDs validated by reputable vendors (ex. Persona, Jumio).
  • Consider adding live biometric checks (face match with liveness detection) before onboarding remote employees. Daon is an example of a tool that can do this remotely - a rough sketch of how such checks might gate onboarding follows below.
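
The sketch below is purely illustrative, not a real vendor integration: IdentityVendorClient and its fields are hypothetical stand-ins for whichever verification provider you choose (Persona, Jumio, Daon, etc.), and the actual API calls and response formats will differ.

```python
# Hypothetical sketch of an offer-stage identity verification gate.
# "IdentityVendorClient" stands in for whichever verification vendor you
# integrate; real vendor APIs and field names will differ.

from dataclasses import dataclass


@dataclass
class VerificationResult:
    document_valid: bool    # government-issued ID passed the vendor's validation
    face_match: bool        # live selfie matches the photo on the ID
    liveness_passed: bool   # liveness detection ruled out a replayed or synthetic face


class IdentityVendorClient:
    """Placeholder wrapper around a real verification vendor's SDK."""

    def run_checks(self, candidate_id: str) -> VerificationResult:
        # A real integration would call the vendor's API here and map the
        # response into VerificationResult. Stubbed out for illustration.
        raise NotImplementedError("wire this up to your vendor's SDK")


def cleared_for_onboarding(client: IdentityVendorClient, candidate_id: str) -> bool:
    """Only release onboarding if every identity check passes; otherwise escalate."""
    result = client.run_checks(candidate_id)
    failed = [
        name for name, passed in [
            ("document_valid", result.document_valid),
            ("face_match", result.face_match),
            ("liveness_passed", result.liveness_passed),
        ] if not passed
    ]
    if failed:
        # Route to a human reviewer rather than auto-rejecting the candidate.
        print(f"Candidate {candidate_id}: manual review needed - failed {failed}")
        return False
    return True
```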

2. Add Video Verification Early in the Process

  • Schedule short live video verification calls before or after interviews. Tools like Reality Defender offer real-time video deepfake detection.
  • Look for inconsistencies: lagging lip movement, unnatural blinking, "glitches" in backgrounds.
  • Train your recruitment team to recognize signs of deepfakes - overly smooth skin, a robotic-sounding voice, or a mismatch between the lighting on the face and the background are all tell-tale signs of a synthetic identity.

3. Use IP Verification and Geolocation Checks

  • Incorporate IP address verification during key stages of the hiring process (ex. before the first screening interview).
  • Flag anomalies, like IP addresses that don't match the candidate's stated location - especially when they appear to be routed through suspicious VPNs or anonymizers.
  • Some applicant tracking systems (ATS) and video interview platforms now offer IP logging and geolocation alerts as built-in security features (ex. SmartRecruiters).

IP verification isn't foolproof, but it does add another valuable layer of detection - especially when combined with other security measures.
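
For teams that want to see the mechanics, here's a minimal sketch of that kind of check. It assumes you can capture the candidate's IP at a key step (for example, when they join the first screening call); lookup_ip_country is just a placeholder to be wired to whatever geolocation provider your ATS or security team already uses.

```python
# Minimal sketch of an IP vs. stated-location consistency check.
# lookup_ip_country is a placeholder - connect it to the geolocation
# provider your ATS or security team already uses.

from typing import Optional


def lookup_ip_country(ip_address: str) -> Optional[str]:
    """Placeholder: return an ISO country code for the IP, or None if the lookup fails."""
    raise NotImplementedError("wire this up to your geolocation provider")


def location_mismatch_flag(ip_address: str, stated_country: str) -> Optional[str]:
    """Return a reviewer-facing flag if the IP doesn't match the candidate's stated country."""
    ip_country = lookup_ip_country(ip_address)
    if ip_country is None:
        return f"Could not geolocate {ip_address} - verify the candidate's location manually."
    if ip_country.upper() != stated_country.upper():
        return (
            f"IP {ip_address} resolves to {ip_country}, but the candidate stated "
            f"{stated_country}. Review for VPN/proxy use before moving forward."
        )
    return None  # no flag - locations are consistent
```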

4. Leverage Technical Screening

For technical roles:

  • Use live coding assessments on platforms like CoderPad, HackerRank, or CodeSignal.
  • Many of these platforms provide various fraud-detection mechanisms - the sketch after this list shows one way you might turn exported session data into review flags.
  • Avoid relying solely on pre-completed projects or GitHub portfolios, which could be fabricated.
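
If your assessment platform lets you export per-session data (completion time, paste events, tab switches), those signals can feed a simple review-flag script. This is a sketch only - the field names below are made up for illustration, so check what your platform actually exposes and adapt accordingly.

```python
# Sketch of turning exported assessment-session data into soft review flags.
# The field names here are illustrative; real platform exports will differ.

from dataclasses import dataclass
from typing import List


@dataclass
class AssessmentSession:
    candidate_id: str
    time_taken_minutes: float
    expected_minutes: float      # typical completion time for this assessment
    paste_event_count: int       # large pastes can suggest outsourced answers
    tab_switch_count: int


def review_flags(session: AssessmentSession) -> List[str]:
    """Collect soft signals for a human reviewer - no single flag proves fraud."""
    flags: List[str] = []
    if session.time_taken_minutes < 0.25 * session.expected_minutes:
        flags.append("Finished implausibly fast relative to the benchmark time.")
    if session.paste_event_count > 5:
        flags.append("High number of paste events during the session.")
    if session.tab_switch_count > 20:
        flags.append("Frequent tab switching - the solution may have been sourced elsewhere.")
    return flags
```

Treat any flags this raises as prompts for a closer human look, not automatic disqualifiers.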

5. Implement Secure Reference Checks

  • Speak directly to references over the phone or via secure video - avoid email-only verifications.
  • Verify reference identities separately (ex. check LinkedIn profiles, call companies directly).

6. Educate Hiring Teams

Host training sessions so that all recruiters, interviewers, and HR team members are aligned and educated on:

  • What deepfake candidates are
  • The warning signs and what to watch for
  • Best practices in addressing deepfakes
  • The escalation path if something feels suspicious

Awareness is often the first and most effective line of defense.

Finally, remember that deepfake hiring attempts are an organization-wide threat:

  • Set up direct communication channels with your security teams.
  • Create incident response protocols for suspicious candidates.
  • Stay updated on regional employment law compliance (especially regarding verification and anti-fraud measures).

💭 Final Thoughts: Trust, but Verify

As recruiters, we tend to believe the best in people. It's what keeps us going despite the ups and downs that hiring brings. It's a strength - but in today's world, trust must be verified.

Even if a candidate seems perfect on paper and "feels right" in the interview, take time to pause and verify. A few extra minutes of due diligence could save your company millions - and protect your reputation and job.

Prepare Accordingly

The rise of deepfake candidates is a wake-up call. It's not about becoming paranoid; it's about becoming proactive.

Just as we adapted to remote work, virtual interviewing, global talent sourcing, and AI recruitment tools, we must now evolve again - building hiring processes that protect our companies while still delivering an exceptional candidate experience.

As we've talked about in previous editions, this new era of Talent Acquisition requires a higher level of technical sophistication and awareness.

We must start fortifying our recruitment processes and building systems that can operate effectively in a world where AI-based threats exist.


Want to learn more about this topic? Here are some resources to get started.

🎥 Interpret Video and Deep Fakes

TLDR: This quick video will walk you through how to detect suspect behavior in videos and identify potentially fraudulent activity.

📚 FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions

TLDR: In FAIK, cybersecurity and deception expert Perry Carpenter unveils the hidden dangers of generative artificial intelligence, showing you how to use these technologies safely while protecting yourself and others from cyber scams and threats. The book provides a crucial understanding of the risks associated with generative AI tools like ChatGPT, Claude, and Gemini, offering effective strategies to avoid falling victim to their more sinister uses.


Did you enjoy this edition? You can support us by leaving a review!

If you'd like to see a specific topic covered in this newsletter, you can submit your request directly.