OpenAI’s ChatGPT shows us that the future of interviewing is human
While many companies use automated coding tests as the first step of their software engineering interview process, recent developments in generative Artificial Intelligence (AI) will challenge this entire system. On November 30, 2022, OpenAI released a research preview of its new AI-powered chatbot, ChatGPT. The chatbot can not only answer users’ questions but also write code in response to text prompts, complete with a detailed explanation of how the code works and test cases showing the output.
Automated coding tests are seen as a cost-effective filtering mechanism for finding the right candidates to bring on-site to build world-class engineering teams. With these automated tests, dishonesty has always been a key concern for companies. Because candidates have access to the internet while taking the test, they can always search for similar questions, making it hard to evaluate a candidate’s true technical abilities. In practice, engineers use existing solutions on sites like Stack Overflow to see how others have solved similar problems and then build an ideal solution to the challenge they are facing. With the lack of context in an automated coding test (companies only see the candidate’s final answers, not how they got there), it becomes difficult to distinguish between candidates who draw on other code to craft a well-reasoned solution and candidates who copy other code without really understanding it.
Automated test providers have historically managed this problem in two ways. First, they develop large banks of questions that continually rotate, reducing the chance that a candidate can find the same question online. And second, they use plagiarism detection tools to see how closely a candidate’s answers match those found online and those submitted by previous candidates. This strategy can sometimes work because originality is a strong indicator that the engineer actually used the required skills to develop the solution.
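To make the idea concrete, a naive version of such a similarity check can be sketched with Python’s standard-library difflib. Real plagiarism detectors are far more sophisticated (token-level analysis, structural fingerprinting, large reference corpora); this is only an illustrative sketch of the comparison step:

```python
import difflib

def similarity(candidate_code: str, known_solution: str) -> float:
    """Return a rough 0..1 similarity ratio between two code snippets."""
    # Normalize whitespace so trivial reformatting doesn't hide copying.
    a = " ".join(candidate_code.split())
    b = " ".join(known_solution.split())
    return difflib.SequenceMatcher(None, a, b).ratio()

known = "def add(a, b):\n    return a + b"
submitted = "def add(x, y):\n    return x + y"
# A high ratio flags a near-copy; renaming variables barely lowers it.
print(round(similarity(submitted, known), 2))
```

As the text notes, this kind of check leans on originality as a proxy for skill, which is exactly the assumption generative AI undermines.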
Given a common question that someone might encounter in an automated coding test, ChatGPT easily creates a solution, an explanation, and test output.
While it would previously have taken a developer time to first devise a solution and then write the code, ChatGPT completes the entire process in seconds.
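For concreteness, here is the kind of answer ChatGPT produces for a classic screening question such as Two Sum (the question choice is ours for illustration; the original post showed a screenshot of ChatGPT’s actual output): working code, an explanation in comments, and test cases showing the result.

```python
def two_sum(nums, target):
    """Return indices of the two numbers in nums that add up to target."""
    seen = {}  # value -> index of values visited so far
    for i, n in enumerate(nums):
        complement = target - n
        if complement in seen:
            # One pass over the list: O(n) time, O(n) space.
            return [seen[complement], i]
        seen[n] = i
    return []

# Test cases showing the output, in the style ChatGPT appends to its answers.
print(two_sum([2, 7, 11, 15], 9))  # [0, 1]
print(two_sum([3, 2, 4], 6))       # [1, 2]
```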
With these advances in AI, candidates trying to solve a coding test aren’t limited to searching for previous answers on Stack Overflow. Instead, they can ask the chatbot for the answer directly and have the AI tailor the code to whatever type of solution they are looking for, in whatever programming language they choose. While people are still exploring ChatGPT, it has so far been able to develop solutions to more difficult coding challenges such as the Coin Change Problem or the Dining Philosophers Problem.
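The Coin Change Problem mentioned above (fewest coins summing to a target amount) has a standard dynamic-programming solution of exactly the kind ChatGPT readily generates; a minimal Python sketch:

```python
def min_coins(coins, amount):
    """Fewest coins from `coins` that sum to `amount`; -1 if impossible."""
    INF = float("inf")
    dp = [0] + [INF] * amount  # dp[a] = fewest coins needed to make amount a
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1

print(min_coins([1, 5, 10, 25], 63))  # 6  (25 + 25 + 10 + 1 + 1 + 1)
print(min_coins([2], 3))              # -1 (no combination reaches 3)
```

The point is not that the problem is unsolvable by hand, but that a chatbot producing this in seconds erases the signal an unproctored test was meant to capture.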
The ability of AI to turn a word problem into an algorithmic coding solution makes it incredibly difficult to moderate offline coding challenges. You cannot be sure whether the solution was written by the candidate or by a machine. Moreover, because AI can create many different solutions to the same problem, it will become increasingly hard to check for similarities to existing code. In theory, AI could create original solutions to coding problems in the same way a person could.
With AI, automated coding tests become less valuable not just because of the difficulty of assessing technical abilities, but because AI is likely to change the way software engineering operates. There is a very real future where these tools enable developers to work far faster than they do today, a future where developers use AI to write and debug code so they can spend more time tackling the most complex problems.
Karat recognizes how AI is already changing the future of software engineering. Assessing developers’ skills in a natural environment–one that includes referencing Stack Overflow, Google searches, or even generative AI tools–allows hiring managers to get a true picture of a candidate’s abilities. This will allow employers to build the engineering teams of a future that will thrive in a world where AI becomes a commonplace tool in helping engineers write and debug code.
As AI continues to improve, we see a future with fewer automated coding tests that require a candidate to reach a known solution, and more interviews with a person that test how a candidate approaches, explains, and solves problems with many possible answers (perhaps even using AI to help write the code). This future includes Interview Engineers (professional software developers who facilitate live technical assessments), subject matter experts who continually develop new interview content and formats, and a coding studio that allows candidates to work alongside the engineer conducting the interview. The future of interviewing will help both candidates and employers take advantage of the changes AI presents for technical recruiting. By assessing developers’ skills in a natural environment and discussing problems rather than simply solving them, Karat empowers companies to better recruit and hire world-class technical talent.
Finally, as we explore the abilities of ChatGPT and future iterations of AI, we wanted to know: how would ChatGPT perform in an actual Karat Interview? To find out, we had a Karat Interview Engineer run a technical interview with ChatGPT as the candidate, typing the questions into the interface rather than asking them aloud.
The result? It wasn’t great. ChatGPT was able to solve an introductory question quickly. But technical interviews aren’t just about producing code. They are an assessment of problem-solving and decision-making, and this is where ChatGPT fell short. The bot struggled to adapt its code to the more complex scenarios in follow-up questions, and it wasn’t able to respond to probing questions about its decisions. We’ll be publishing a full recap of the interview with more details in the coming week, so stay tuned for a deeper look at what happens when an AI bot sits down with an Interview Engineer in real life!