What developer candidates need to know about the Karat interview.
Karat’s approach to technical interviews
Congratulations! You’ve been invited to take a Karat interview. This likely means that a prospective employer has reviewed your application and resume, conducted a recruiter screen, and selected you from the pool of applicants to move forward in the technical hiring process. The next step – whether you’re interviewing with Karat or another company – will involve some kind of technical assessment to measure the key skills you need for success on the job. But what is a technical interview, and what does it even measure?
You’re (hopefully) excited. You’re (probably) nervous. Or maybe you’re annoyed at the prospect of a coding test. It doesn’t take much effort to uncover what candidates think of most code tests. Every few Reddit posts, there’s a candidate saying some version of: “burned out interviewing and on the last round for the on-site I keep getting BS coding questions in [LANGUAGE OF THE WEEK]. Literally I’m doing a bunch of exercises which have nothing related to the job.”
As frustrating as the process can be, there’s a reason we’re doing this. To understand it, let’s start by putting the technical screening interview in context.
The technical interview falls between sourcing and the full interview process, and you can think of it as a hybrid of the two. Like sourcing, screening interviews are applied to a larger pool of applicants, so it’s important to the hiring company that this phase isn’t too costly in time, effort, or the money that pays for that time and effort. Conversely, screenings are more like other interviews than sourcing in that they’re a dedicated, active assessment of a given candidate. While sourcing can often rely on existing information about a candidate, once we start the screening process we typically ask the candidate to work on a specific task, so that we can assess it fairly against other candidates who have done the same thing.
Ultimately, the process of a technical interview is asking one question: What is the likelihood that this candidate would succeed in this role? A hypothetical perfect interview would always give us the correct answer to that question. This may appear to be impossible to achieve, but it’s straightforward if we can ignore our other constraints. If we simply hire the candidate and see if they succeed, that will give us an accurate answer every time. The downsides are obvious, though: we can’t hire everyone who applies. Even if it turns out to be possible to have all applicants work on the same task in parallel, the expense of paying everyone for redundant work isn’t viable. And if we expect everyone to complete the task unpaid, we raise the serious question of how they will meet their own needs while doing this work.
But this hypothetical perfect interview isn’t useless! While we can’t conduct a technical interview in its ideal form, we can adapt it into something more viable. We’ll have to make it less effective as we adapt it, but that’s okay: even a small reduction in effectiveness can let us make the process far more manageable!
With that in mind, let’s talk about some changes we could make to the “ideal” interview to make it more feasible, and see how it evolves into the real-world interview process. Along the way, we can build an understanding of what each component of the interview process is for, and what we can expect from each of them.
These components are the initial technical screen, the in-depth interview, and the practical evaluation of doing the job itself.
When we think about that ideal interview, one major challenge to making it work is the amount of effort both sides must put in before seeing any results. In many technical jobs, it takes weeks to understand important details particular to the company or role, and months before someone is fully up to speed. If we depend on all of that being part of the interview process, we’re setting up an enormously protracted process, in which the hiring team might not find the right person for many months. We’d also have created a process in which it’s almost impossible to interview for a new job while working in a different role.
As software developers, we address this sort of uncertainty in long technical projects by front-loading risk: we begin by working on the parts of the system that we are most concerned might fail, even if it’s only by building a quick experimental prototype. We can apply the same process to our hypothetical interview to reduce the time-commitment. If there are tasks that are particularly important to the role, we want to ensure the candidate gets an opportunity to try them early in the process, so that we learn as soon as possible whether they’ll be able to complete them on the job.
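The front-loading idea above can be sketched in code. This is a minimal, hypothetical example – the task names and the importance/uncertainty weights are invented for illustration – showing how you might order assessments so the ones most likely to reveal a deal-breaker come first:

```python
def front_load(tasks):
    """Order assessment tasks so that the riskiest ones come first:
    high importance to the role, high uncertainty about whether the
    candidate can do them."""
    return sorted(
        tasks,
        key=lambda t: t["importance"] * t["uncertainty"],
        reverse=True,
    )

# Hypothetical tasks and weights, purely for illustration.
tasks = [
    {"name": "code review etiquette", "importance": 0.4, "uncertainty": 0.2},
    {"name": "core data-structure fluency", "importance": 0.9, "uncertainty": 0.8},
    {"name": "debugging unfamiliar code", "importance": 0.8, "uncertainty": 0.7},
]

for t in front_load(tasks):
    print(t["name"])  # riskiest, most important tasks come first
```

The exact scoring function matters less than the principle: test the abilities you are least certain about, and that matter most, as early as possible.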
Of course, not all tasks can be conducted on demand. Some tasks require the right circumstances to demonstrate. If we’re hiring someone who will be responsible for giving large public presentations, it’s probably not feasible to get a thousand people together in an auditorium as part of the hiring process.
Similarly, if someone will be responsible for large-scale software design, we probably can’t build a truly complex system in a reasonable amount of time. But we have some techniques we can use to approximate these situations. First, we can ask them to do related tasks from which we can reasonably evaluate their capabilities, then extrapolate to the larger task. If we can’t arrange a packed auditorium for an interview, we might ask them to present to a small panel, under the reasonable theory that someone who has difficulty in the pared-down situation is not likely to be better in the full-scale situation. Of course, this relies on making sure that the smaller, related task is a good proxy for something important in the role. We’ll discuss that more in a moment.
Ultimately, what these techniques allow us to do is to restructure our ideal interview process to be more feasible. First, we want to get information about key abilities as soon as possible – if there’s something that could prevent the candidate from succeeding, we want to identify it early and let both sides move on to try again. We also want to evaluate harder-to-observe abilities earlier, by using smaller examples that give us data without the effort involved in the full-scale version of the task.
In this framework, one reasonable way of conducting the process could be to front-load an initial check for the most critical abilities, and keep it strictly time-limited so that the candidate and team know what to expect. Then, for candidates who demonstrate the key capabilities of the role, we can move on to a more in-depth set of opportunities to show those off, and use that larger data-set to select the candidates who are most likely to succeed. Those candidates can move forward to the practical evaluation of actually trying to do the job, with the full support of the hiring team.
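That funnel structure can be summarized in a short sketch. The phase names, effort costs, and pass bars below are assumptions made up for illustration, not a description of any real hiring pipeline:

```python
# Each phase costs more effort than the last and yields more information.
# A candidate advances only after clearing the previous phase's bar.
PHASES = [
    {"name": "technical screen", "hours": 1, "pass_bar": 0.6},
    {"name": "in-depth interview", "hours": 5, "pass_bar": 0.75},
]

def run_funnel(candidate_scores):
    """candidate_scores maps phase name -> observed score in [0, 1].
    Returns the furthest phase reached and whether the candidate
    advances to the practical evaluation (the job itself)."""
    for phase in PHASES:
        if candidate_scores.get(phase["name"], 0.0) < phase["pass_bar"]:
            return phase["name"], False
    return "practical evaluation", True

stage, advanced = run_funnel(
    {"technical screen": 0.8, "in-depth interview": 0.9}
)
print(stage, advanced)  # prints: practical evaluation True
```

The point of the ordering is cost control: the cheap, time-limited phase filters the large pool, so the expensive phase is spent only on candidates who have already shown the key capabilities.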
This is how a good interview process is structured – a series of opportunities to get more information about a candidate, each phase taking more effort from both candidate and team than the previous one.
So, based on all of that, why would an interviewer ask you to write code that solves a small problem you’ll probably never encounter?
As you may have guessed, the answer has to do with those proxy tasks mentioned earlier. While we like to imagine that an interview should pose a “real world” problem, ask most working software developers to discuss the problems they’ve been working on, and you’ll find the explanations aren’t quick. The real day-to-day work of software relies on a huge network of understanding: the problem being solved, in all its details; the frameworks, protocols, and dependencies being relied on and interacted with; and the past and future of the codebase itself. To solve those “real world” problems, we’d run into the same constraints as in our “ideal” interview: an interview that takes enormous upfront effort for marginal benefit over something much faster for both candidate and team.
For that reason, effective interview questions are scale models of real-world problems, constructed to get at the kernel of the technical task from a real-world situation, or a category of them, without requiring much context. Of course, not every interview question is a good one. Sometimes people just ask questions they’ve heard other people ask, without really understanding what the question should be trying to assess. Other times people ask questions that assess a skill that isn’t essential for the job, because they’re good at it and think other developers should be as well. But the reason technical coding questions exist is that, when they’re done well, they give candidates an opportunity to demonstrate some set of core programming skills in a brief enough time to fit in around people’s busy lives.
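To make the “scale model” idea concrete, here is a generic example of the kind of small question described above (not an actual Karat question): find the first non-repeating character in a string. It compresses a real-world kernel – counting things and scanning them in order – into a few minutes of work, with no surrounding context required:

```python
from collections import Counter

def first_unique_char(s):
    """Return the first character in s that appears exactly once,
    or None if every character repeats."""
    counts = Counter(s)          # one pass to count occurrences
    for ch in s:                 # second pass, in original order
        if counts[ch] == 1:
            return ch
    return None

print(first_unique_char("interview"))  # prints: n
```

A question like this gives the interviewer a window into how a candidate handles data structures, iteration, and edge cases (an empty string, all-repeating input) without needing weeks of shared context.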
We’ve talked a lot about interviewing theory, but what about the interview experience? It’s the role of an interviewer to guide the candidate through the entire interview experience. A good interviewer puts the candidate at ease. They deliver the questions clearly and unambiguously, and they partner with the candidate to bring out their best and let the candidate demonstrate their full abilities. The interview experience should reflect the job, so unless your engineering culture is particularly adversarial and toxic (and that’s a deeper issue to address!), your interview shouldn’t feel that way to a candidate. Similarly, if you expect engineers to make use of reference material on the job, why would you artificially exclude that from the interview?
Good interviewers practice asking the questions that map to the core competencies their organization is evaluating. At Karat, we have dedicated interview engineers performing that role: software developers from across the industry who practice their interviewing skills and regularly conduct interviews. Our team of content engineers creates the questions and tests that they measure the job-relevant skills they’re intended to. To ensure that our interviews are fair and consistent, Karat implements quality-control processes, like recording and reviewing interviewer performance, and analyzing the effectiveness of questions throughout their use. This helps us keep interviewing as rigorous as other aspects of the engineering discipline.
When scoring candidates, people tend to use a wide spectrum of adjectives, and having remote interviewers can exacerbate this. If we’re all in a room together, we can have a discussion to work out whether a candidate who earns scores of “above average,” “good,” “satisfactory,” “OK,” and “great” from five different interviewers is worth inviting to the next round. If we want to make decisions without getting everyone together at once, we need to treat those words as data points, which can magnify the inconsistencies.
At Karat, we solve this with a structured scoring rubric that measures specific competencies on a consistent scale. This makes it possible to aggregate feedback and compare it from interviewer to interviewer, and allows for more consistent reporting around areas such as solution optimality, completion, guidance, and debugging.
Here’s an example of one possible rubric for a structured evaluation of implementation quality.
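One way such a rubric could be represented in code is as a shared scale plus per-competency scores. The four-point scale and its labels below are hypothetical inventions for illustration; the competency names come from the areas listed above:

```python
# Hypothetical four-point scale; not Karat's actual rubric.
RUBRIC_SCALE = {
    1: "does not meet bar",
    2: "partially meets bar",
    3: "meets bar",
    4: "exceeds bar",
}

# Competency areas named earlier in the article.
COMPETENCIES = ["solution optimality", "completion", "guidance", "debugging"]

def summarize(scores):
    """scores maps each competency to an integer 1-4 on the shared
    scale. Returns the average score and a labeled breakdown."""
    assert set(scores) == set(COMPETENCIES)
    avg = sum(scores.values()) / len(scores)
    return avg, {c: RUBRIC_SCALE[s] for c, s in scores.items()}

avg, detail = summarize(
    {"solution optimality": 3, "completion": 4, "guidance": 3, "debugging": 2}
)
print(avg)  # prints: 3.0
```

Because every interviewer scores against the same scale, results from different interviewers can be aggregated and compared directly, which is exactly what the free-form adjectives above make difficult.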
We use these rubrics to generate an interview summary for each candidate, along with a recommendation related to the company’s hiring bar. We share this report, as well as the interview itself, with the hiring team, who uses the information to determine the next step in the process.
While we focus on creating a consistent and concise process for candidates, it’s not the only possible process. Across the industry, approaches used for technical interviews still vary widely. Our own developers have experienced automated coding tests, static code challenges, whiteboard interviews, and week-long take-home assignments, and we’re sure there are more approaches out there that we haven’t seen yet. Each of these approaches has its own strengths and weaknesses, but the underlying goal is the same: they’re all an attempt to learn how someone would do in the actual job, in a more efficient and fair way than just throwing them into the job would be.
At Karat, our mission is to make every interview predictive, fair, and enjoyable. We developed our approach to strike a balance between gathering nuanced and accurate data about a candidate’s skills, and the time and effort the candidate has to put in to produce that data.
It’s our hope that in your interview, we’re able to accomplish that without driving you to join those frustrated voices on Reddit wondering what this is even for.
To all candidates out there, please continue sharing your feedback in our post-interview surveys, and good luck on your hiring journeys!