Your top job candidate might be a coordinated AI scam 

Scammers have grown ever craftier at finding new ways to collect a paycheck, and AI has been the ultimate accomplice.
April 22, 2025

AI has no shortage of drawbacks, but helping fraudsters scam their way into a role is one you should be on high alert for.

I'm a CTO who has hired hundreds of technical professionals over the past three decades, but nothing prepared me for what I encountered during a recent hiring round.

In early 2025, I needed to fill several key positions quickly and turned to the open market. What I uncovered was a sophisticated operation: applicants using AI to fabricate resumes and coach candidates through interviews, sometimes with multiple people posing as a single candidate. These fabricated applications seemed perfectly aligned with our needs as a business, but we soon discovered how serious a threat they posed to our hiring quality.

This pattern is not isolated. I have had conversations with technology executives across sectors who have encountered similar attempts. While the immediate threat is companies paying for inadequate performance, a more sinister risk exists. Malicious actors could access sensitive information, install malware, or extract data. For public companies or government institutions, this presents genuine national security concerns. 

Signs you’re in for an AI interview 

The interview stage is never fun for anyone on either side of the table, and mishaps or awkward moments are inevitable. But there are certain red flags you need to look out for.

One of the earlier candidates had “camera issues,” so we stayed on audio. Their responses to technical questions were direct and accurate, so I scheduled a team interview but informed them that being on camera was a requirement. During the team call, several members noticed the candidate kept leaning in, reading from their screen. They suspected AI assistance, and I apologized for letting someone like that slip through.

After just a handful of other interviews, I started piecing it together. It wasn’t one thing but a constellation of tiny indicators:

  • Digital backgrounds with lots of people walking behind them. This might be accompanied by an unwillingness to turn off virtual backgrounds. If someone has stated that they’re calling from home, and they’re clearly not, it should raise internal alarms for you.  
  • Unnatural response patterns, often pausing or leaning in to look at their screen. If the language feels robotic or inauthentic, there might be cause for concern.  
  • Reluctance or inability to move hands in front of their face (indicating possible deepfake use). 
  • Generic, theoretical answers that sound like what you’d get from asking an LLM about a technology or process. 
  • Failure to provide specific examples using the STAR method. When asked about specific work experiences, they would pivot to vague theoretical situations or implementation details.  
  • Details that contradicted their resume or follow-up answers that made no sense.

Catching a scammer in the act

The most useful way to uncover fraudsters was to dig deeper into scenario-based questions, finding an aspect of their career history and asking something contrary to it. I’d do this by misattributing something, like asking them to delve into an aspect of Azure when they’d said they’d only used AWS at a company. A real candidate would immediately correct me, but the scammers would invariably get sidetracked, telling a story inconsistent with their stated experience. I would then bring this to their attention, often resulting in them panicking and ending the call.

This was the case most of the time, but in one instance, when I confronted a scammer with these inconsistencies, they seemed genuinely amused at being discovered. This led to a candid conversation about what was happening.

When I asked if they would drop their virtual background, they smiled and momentarily disabled it. What I saw shocked me. They were in what appeared to be a call center, with dozens of people all running the same scam.

Quite proudly, they explained that they were part of an organization that scrapes job postings, uses AI to generate tailored resumes, and submits them. Once interviews are scheduled, someone is selected to be the “candidate,” reads the resume, and pretends to be that person. For companies with multiple interview rounds, it might not even be the same person each time.

Their operation has one goal: to get hired, then farm the work to AI. Even if discovered, it can take companies several weeks to notice, by which time they’ve collected multiple paychecks. Sometimes, they don’t get discovered and can collect several salaries simultaneously. 

Scam-resistant hiring practices

Awareness training is essential. Everyone involved in hiring, from resume reviews to final interviews, should understand that these sorts of scams are pervasive. Based on my experience, here are practices that can help protect your organization:

  • Verification checks: ask candidates to turn off virtual backgrounds and move their hands in front of their faces. If they can’t do either, it might be time to end the conversation. 
  • Focus on depth: require in-depth personal opinions on tools and technologies. 
  • Use the STAR method: structure questions to elicit specific situations, tasks, actions, and results from the candidate’s experience. Follow up with questions about the lessons they learned and the pros/cons of their experience. 
  • Proper preparation: thoroughly review each resume before interviews.
    • Become familiar with the candidate’s claimed work history to catch subtle inconsistencies. 
  • Live technical demonstrations: replace algorithmic questions with interactive problem-solving sessions where candidates must think aloud and explain their reasoning.

Instead of spending the first 5–7 minutes of the interview explaining the company background, get candidates talking immediately. Opening with “Where are you calling in from today?” or other easy icebreaker questions can establish early rapport. This tactic let me start gauging their communication patterns, so I could see whether their demeanor, body language, and speaking style shifted dramatically when we moved to the more difficult technical questions.

If you suspect you may have hired someone under false pretenses, verify it quickly. Schedule a virtual meeting and have them walk through recent work while explaining their decisions. For technical roles, arrange pair programming sessions with trusted team members to observe their thought processes in real time.

The bigger picture

This trend has potentially detrimental implications for remote work. An in-person interview would eliminate most opportunities for this type of scam. For those of us building remote-friendly workplaces, we must refine our methods to ensure the threat of interview scams doesn’t lead to abandoning remote work, which would exclude talented workers from rural and international locations.

We are witnessing a new incarnation of the call center scam model, now targeting technical hiring. These operations are playing a numbers game. They are betting that small companies lack rigorous interview skills and that large companies follow formulaic processes with surface-level questioning. Even if they succeed only 1% of the time, that represents a significant payday.

As AI technology becomes more sophisticated, this problem will only accelerate. The good news? With awareness and a few targeted changes to our interview processes, we can stay ahead of this new breed of scammer while maintaining efficient, inclusive hiring practices.