AI Hiring

03.25.2025

AI in Interviews: Cheating or the New Normal?

Gordie Hanrahan

While interviewing in the age of AI has been a hot topic since the first iteration of ChatGPT, Amazon’s recent crackdown on candidates using AI in interviews has brought the long-simmering debate to the forefront.

The use of AI in interviews has come up in almost all of our executive briefings this year. Increasingly, the debate has shifted from how to prevent AI use in interviews to a more philosophical question: is using AI in interviews cheating? As more engineers use AI to work more efficiently, there are several factors to weigh in answering that question.

When the Majority Uses AI

We hear all the time from hiring managers that even when they tell candidates not to use AI, many do. One tech leader we spoke to said that 80% of candidates used a large language model (LLM) to complete their top-of-the-funnel code test, even though they were explicitly told not to. 

The percentage of AI use was so high that it created a tipping point. The company decided to ignore the cheating and simply move top performers to the next round of the interview process, where interviewers could better assess their technical skills.

While unauthorized use of AI is easier to spot in live interviews, it’s much harder to detect in take-home tests. This is where AI is clearly changing the game.

There are two issues here:

  1. Should candidates be allowed to use AI in interviews?
  2. If they are not allowed to use AI, how do you prevent it?

Should AI be allowed in interviews?

Our position is that interviews should reflect the competencies required for the job. As AI’s role in software development grows, interviews will need to evolve. Skills that interviews emphasize today, like memorizing syntax and data structures, will become less relevant in a world where LLMs produce meaningful amounts of code. As this transformation occurs, interviews will need to focus on the underlying skills that make strong software engineers: problem-solving, systems thinking, and handling edge cases. Engineers who possess these skills will become even more valuable as AI turbocharges their productivity.

When it comes to where the market stands today, many Fortune 500 companies share Amazon’s point of view. However, there’s a growing number of companies that are embracing the use of AI in interviews. They want to hire engineers who leverage new tools to enhance their skills. For example, Goldman Sachs has given its engineers access to AI coding assistant tools like GitHub Copilot and Gemini Code Assist. The company has even held competitions to foster creative AI use among developers. 

Outside the world’s biggest enterprises, early adopters are not only asking us to enable AI in interviews; they’re also asking us to measure candidates’ proficiency with it. This will become the norm as AI adoption in software development accelerates. We’ve always operated in an “open book” interview environment by allowing the use of Google or Stack Overflow. AI is the next logical evolution for companies that support it.

What if AI is not allowed?

Whether you decide to allow candidates to use AI or not, it’s essential to enforce the rules. This creates a level playing field. Otherwise, the most ethical candidates are at a disadvantage, which doesn’t create a fair or predictive hiring process.

Measuring talent requires a deliberate approach and methodology, including clearly defining the specific skills you are measuring and controlling for as many variables as possible. AI usage is one such variable. If the goal is to obtain an AI-agnostic signal of a candidate’s abilities, LLM usage must be prohibited for all candidates to ensure fairness. Companies taking this approach should explicitly state that policy.

Even then, enforcement is hard. Any take-home assessment without a live component is increasingly vulnerable because monitoring LLM usage in that setting is extremely difficult. This is why more companies are leaning into live interviews. Even if candidates attempt to use LLMs despite clear restrictions, human interviewers can serve as an effective safeguard: they can engage candidates in discussion, probe their understanding, and detect unauthorized AI usage.

When you watch someone problem-solve and write code, it’s much easier to tell whether they’re working through the problem or transcribing a memorized (or AI-generated) answer.

One of our team’s favorite experiments illustrates this well. We show hiring managers side-by-side video snippets of AI-generated code and code written by real engineers. Then we ask them to identify which is which. When comparing only the final code samples, the distinction is nearly impossible—it’s a coin flip. But when they watch the coding process unfold, the difference becomes clear. Engineers don’t write code in a linear top-down manner; they move around, they rename variables, and they test edge cases. These inherently human behaviors make it much easier to identify AI usage in a live interview setting.
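
To make that intuition concrete, here is a purely hypothetical sketch of how one such process signal could be quantified. It assumes a simple edit log of (timestamp, line number) events; the function name and log format are illustrative inventions, not Karat’s actual tooling or methodology:

    # Hypothetical sketch: score how "non-linear" a coding session is from a
    # simple edit log. Assumes each event is (timestamp_sec, line_number);
    # real interview platforms would capture much richer signals.

    def nonlinearity_score(edits: list[tuple[float, int]]) -> float:
        """Fraction of consecutive edits that jump backward in the file.

        Linear top-down generation (typical of pasted or transcribed code)
        rarely revisits earlier lines; human problem-solving does constantly.
        """
        if len(edits) < 2:
            return 0.0
        backward_jumps = sum(
            1 for (_, prev_line), (_, cur_line) in zip(edits, edits[1:])
            if cur_line < prev_line
        )
        return backward_jumps / (len(edits) - 1)

    # Example: a session that circles back to earlier lines scores higher
    # than one that only appends at the bottom of the file.
    human_like = [(0, 1), (5, 2), (9, 1), (14, 3), (20, 2), (26, 4)]
    linear = [(0, 1), (4, 2), (8, 3), (12, 4), (16, 5), (20, 6)]
    print(nonlinearity_score(human_like))  # 0.4
    print(nonlinearity_score(linear))      # 0.0

An append-only session scores zero, while a session that keeps revisiting earlier lines, as human problem-solving tends to, scores higher. In practice, any real detection approach would combine many such signals with trained human judgment rather than rely on a single heuristic.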

Easier said than done

Interviewers must be consistently trained to assess whether a candidate truly understands their work and to spot signs of “cheating.” Yet, defining what constitutes cheating isn’t always straightforward—nor is accurately detecting it, especially as candidate tactics evolve. 

The stakes are high: disqualifying the wrong candidate could mean losing a talented employee. Failing to catch actual cheating risks bringing in someone who doesn’t meet your standards. We’re increasingly seeing companies include clauses in offer letters that allow them to rescind an offer or terminate employment if they later discover a candidate cheated during the interview process—making Amazon just one of many taking this approach.

But by preventing engineers from using AI in interviews, companies might be hampering their ability to hire engineers who can adapt to new technologies, level up their skills, and become more productive. Instead of asking how to prevent AI use, perhaps more companies should be asking how to measure AI proficiency.

All of this is why Karat is investing heavily in developing the next generation of interview formats that integrate LLMs while assessing the competencies that will matter most in the future. Our recent Byteboard acquisition is a key part of this strategy, enabling us to design assessments that align with the AI-driven evolution of engineering work.

For additional perspectives, check out the latest LinkedIn article by Karat’s co-founder and president, Jeffrey Spector.
