Engineering interviews for a better candidate experience
In the first part of our series with Don Gannon-Jones, our VP of Content, we focused on the science behind the Interviewing Cloud and how it shapes technical interviews and hiring. In this post, Don and I discussed best practices for re-engineering interview experiences. For best practices throughout the entire hiring process, check out the latest research and recommendations in our Technical Hiring Guide.
Q: What advice would you give someone creating their own engineering interview questions?
Good interview questions have three main considerations:
1.) Scoring rubric. Any question you ask needs an objective scoring rubric — a scorecard. And that doesn’t have to be binary — in fact, it shouldn’t be. Instead of yes or no, it should be “this is a great answer,” “this is an okay answer” that maybe misses some nuances we wish we had heard, or “this is a very poor answer” that isn’t what we’re really looking for, with clear definitions for each level. Any five people on your team should be able to look at a response, know exactly what the rubric means, and arrive at the same score. That’s one of the ways you remove bias.
2.) Knowledge versus ability. The second thing is to familiarize yourself a little bit with Bloom’s taxonomy. Bloom’s taxonomy is a pyramid, and at the bottom is one of your easiest cognitive activities: knowledge. When a lot of folks write interviews, they’re asking knowledge questions. “List the six types of software test” or “list the four stages in a CI pipeline.” I can memorize that and not even know what any of it means. And so it’s not a very good interview question.
You can aim a little higher up the pyramid: understand, apply, and analyze. With just a little Googling, you can find lists of verbs associated with each of these levels. If you see a verb like “list” in your question, that’s pointing to knowledge.
Aim higher. How would you turn that into a better question? You turn it into a situation someone has to analyze: you want to make sure that when you’re checking your code into the source control repo, you’re not reintroducing any bugs you’ve run into in the past. So, we ask the candidate about the right type of software test for that use case. And if they come back saying, “Oh, a unit test would be the right one for that,” the interview signal is a lot stronger. The candidate now has to understand the concept, know what it’s for, and know when it’s appropriate. That’s a much better question. And the great part is that you’re not necessarily making it harder; you’re just doing a much better probe for the competency you’re after.
3.) Focus on deal-breakers. A lot of software engineers use the interview as a kind of “human unit test.” Any problem they’ve had with a colleague in the past — so-and-so was terrible because they came in here and didn’t even understand X — becomes an interview question, an attempt to stop that from ever happening again. Resist that temptation. Focus on the deal-breakers: take what comprises 80% of the job for 80% of the people, and just laser in on that.
There’s always risk in hiring, and it terrifies us to make a bad hire. But, you can’t ever eliminate that risk without being incredibly unfair, and you don’t want to do that. So, get laser focused on what really, really brings value to the organization as a whole — what brings value to the organization’s customers. Maybe really understanding security problems is what brings value to your customers. Maybe it’s something else. Whatever it is, focus on those things.
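To make the analyze-level question from point two concrete, here is a minimal sketch of the kind of unit test a strong candidate is describing: one that keeps a previously fixed bug from being reintroduced at check-in. The function, the old bug, and the values are all hypothetical.

```python
def parse_port(value: str) -> int:
    """Parse a port number from config; a past (hypothetical) bug let negatives through."""
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

def test_valid_port():
    assert parse_port("8080") == 8080

def test_regression_negative_port():
    # Guards the fix for the old bug: "-1" used to be accepted silently.
    try:
        parse_port("-1")
    except ValueError:
        return
    assert False, "negative port should be rejected"

test_valid_port()
test_regression_negative_port()
```

Run in CI on every check-in, a test like this is exactly what makes “a unit test would be the right one for that” the strong answer: it encodes the past bug so it can never silently return.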
Q: Do you have concrete tips that someone could use to check their engineering interview rubrics?
A phrase we use is “inter-rater reliability.” It answers this question: Between multiple people who are doing the rating, what is their reliability for giving the same answer?
You can check this. Give everyone the questions you’re going to ask. Have them write down their answers, leave their names off, and then get together as a little workshop with those anonymized answers. Take a look at the answers together. The team might say, “Well, those first two are really the same answer — they just change the order a bit.” In that case, great: that’s a sign the question elicits consistent responses.
For each answer, you want to get everyone to explain how they’d score it and why. “This one here is actually naive, because it missed these points that were asked about.” And you need to push the discussion — do we all agree with that? If you find you don’t agree, there are some options. Maybe it’s a bad question, or maybe a bad rubric. Maybe both!
But, if one candidate gives us this answer, and the five of us have totally different opinions on that answer, that’s pretty subjective and we should take that out. We don’t want subjectivity. We need to find a way to probe this competency that’s more objective.
Q: Thinking about culture questions, is there room to reduce the subjectivity by having a rubric, and knowing what you’re assessing ahead of time?
The more structured you can make every aspect of the interview, the fairer it will be — and by structure, I mean it’s deterministic. Any candidate giving the same answers should get the same score, regardless of anything else about them. If the same words come out of their mouths, they should get the same score.
You can do studies on this. Flip the interview around and test your interviewers: before they evaluate candidates, you evaluate them. Give them a pre-established answer, and see whether they’ll be a good interviewer by having them talk to four different people who give substantially the same answers.
You would expect to get the same scores out of those, right? If you think about the interview from that perspective, that’s what you really want. You want it to be almost like a section of code. If the function gets the same input, it’s always going to produce the same output, it’s deterministic. That’s the ideal.
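That input/output analogy can be sketched as a pure function. Everything here — the keywords, the score levels, the answers — is illustrative, not a real rubric:

```python
def score_answer(answer: str) -> int:
    """A deterministic scorer: the same input always yields the same score.

    Keyword matching is a stand-in for a real rubric's level definitions.
    """
    normalized = answer.lower()
    if "unit test" in normalized and "regression" in normalized:
        return 3  # great: right test type and the reason it applies
    if "unit test" in normalized:
        return 2  # okay: right test type, but missing the why
    return 1      # poor: doesn't identify the right test type

# Two candidates giving substantially the same answer get the same score,
# regardless of anything else about them.
a = score_answer("I'd add a unit test to catch the regression.")
b = score_answer("A unit test would catch that regression on check-in.")
assert a == b == 3
```

A real interview is never this mechanical, but the design goal is the same: the score should be a function of the answer alone.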
Q: How do you decouple subjectivity from objective observations?
It’s not wrong to have a section that asks “what is your subjective take on this person?” And then you can determine how much weight to give that. You can determine where it fits in the process, and keep it separate from the more scientific, objective, factual bit.
Q: So, to make the process of interviewing better, you need to create fairer job descriptions that don’t have superfluous requirements?
We’ll see customers audit their hiring process, go back to their job descriptions, take out all the things their current folks aren’t doing, and move them to a section called “nice-to-have.” That lets them express their aspirations. You can even have a section that says “we’re hoping to grow into these five things.” But you’re making it clear those aren’t a hard line for the job, and that makes things a little bit easier for candidates.
We do know that a lot of candidates will opt out of even applying for a job if they feel unqualified. They don’t want to waste anyone’s time, and they don’t want to be embarrassed. So when you ask for that much extra, for requirements that don’t actually have anything to do with the job yet, you’re really creating an unfair situation.
Q: Any final pieces of advice for hiring managers or interview candidates?
Be compassionate. We’re often in such a rush to meet our business goals and protect our current team that we forget the folks on the other side of the interview are people — often ones concerned about their families, the roofs over their heads, and more.
Don’t put gauntlets or rites of passage in their way; don’t look at them as a potential risk. Don’t do interviews that your own team would write bad Glassdoor reviews about if they had to take them.
Be compassionate. Remember, your interview is the first impression the candidate receives of your company, your organization, and your team. Make a good impression and whomever you do hire is more likely to be a loyal, valuable member of a healthy team. Make a bad impression and — well, you could be doing some of the very damage you’re trying to avoid.
Looking for more insights on technical hiring? Check out our new Technical Hiring Guide for some helpful guidelines, best practices, and trends related to every step of the hiring process — including job posting, sourcing, screening, interviewing, making an offer, and onboarding qualified candidates.