The following article originally appeared on Forbes.com under the headline, "Remote Interviewing: How To Evaluate Competencies that Matter."
By Mohit Bhende
Last month I outlined six ways to tune technical interview loops for 100% remote hiring. Since then, the world has continued to adapt in the face of COVID-19, but one thing has remained resilient: the demand for software engineers.
For my next few posts, I’ll take a deeper dive into each of the areas I covered last month to help companies achieve their engineering hiring goals. These posts will offer concrete actions that interviewers can take to make every interview more predictive, fair, and enjoyable for both companies and candidates.
The first step in any hiring process is identifying the skills and competencies that a candidate needs to be successful on the job. However, as anyone who’s ever read a bad job description can surely attest, it’s a step that’s often overlooked.
This can be true for both in-person and remote interview loops, but without the in-person communication of an on-site interview, it’s especially critical that competencies are clearly defined and communicated in remote hiring programs.
In my last piece, I recommended that companies assign owners to review specific job descriptions and responsibilities and align these elements to specific competencies.
To get more predictive signals, remote interviewers should ask themselves three things:
- Are the competencies relevant to success on the job?
- Does the candidate understand how they’re being evaluated?
- Am I measuring each competency separately and predictively?
Establish Relevant Competencies
The output of this process hinges on competencies that are relevant and predictive of job success. One helpful way to decide which competencies to assign to a role is to consider how the person will be evaluated in their performance review at the end of the year.
Will the employee’s performance review focus on broad aptitudes such as language skills, reading comprehension, cultural awareness and easily acquired knowledge (i.e., the ability to quickly Google answers in an interview)? Or will the employee’s performance review look at metrics like the complexity of problems addressed, code quality, and velocity?
While many of us have experienced interviews that inadvertently test the former set of attributes, it is much more predictive of on-the-job success to align with competencies that actually matter.
For a senior software engineering role, the competencies are more likely to include specific language skills such as Java, demonstrated ability to understand and articulate business logic complexity, and ability to complete an architecture review.
Tell The Candidate How They’re Being Evaluated
Next, it is imperative that candidates understand what competencies you’re assessing. What mechanisms do you have in place to clearly communicate to the candidate that they are being assessed for code quality, optimality vs. speed, or the ability to deal with ambiguity?
Hiring managers should clearly define the competencies they’re looking for, and interviewers should clearly articulate those competencies to candidates so they know whether to deliver complete code or an outline, whether a brute-force solution is good enough, or whether they need to take the next step and optimize it.
Measure Each Competency Separately And Predictively
Each competency needs to be measured intentionally.
Many interviewers use questions to measure several competencies at once: Can this person code quickly? Can they think out loud? Can they speed-read four paragraphs of English text to understand a problem brief? Does the candidate know the rules to the same board games as the interviewer? This is detrimental both because it doesn’t isolate a single variable to assess each competency and because it introduces noise by measuring competencies that may not be relevant to the job.
For example, candidates are often required to test their code in an interview. Asking them how they would test it, as opposed to embedding it into a coding exercise, will isolate code testing as a skill.
Isolating and then evaluating competencies boosts the value of each interview segment and gives you a more predictive signal.
Also, be on the lookout for unbalanced multipart questions that build on code implementation.
For instance, if you ask a candidate to find the most common letter in a sentence (“sentence A”) as part of an introductory question, one perfectly acceptable approach would be to sort the letters in the sentence and scan them once for the longest run. Another acceptable approach would be to build a frequency map of the letters in the sentence and pick the one with the highest count. Both of these solutions have roughly equal merit.
However, if the follow-up question is to find the most common letter in sentence A that does not appear in sentence B, the candidate who took the second approach would have a head start, despite not having demonstrated any measurably better skill.
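To make the imbalance concrete, here is a short Python sketch (my own illustration, not code from the article) of both acceptable approaches to the introductory question, plus the follow-up. Notice that the follow-up is a two-line change for a candidate who built a frequency map, but a restructuring for one who sorted:

```python
from collections import Counter

def most_common_letter_sorted(sentence):
    """Approach 1: sort the letters, then scan once for the longest run."""
    letters = sorted(c for c in sentence.lower() if c.isalpha())
    best, best_count = None, 0
    run_char, run_count = None, 0
    for c in letters:
        run_char, run_count = (c, run_count + 1) if c == run_char else (c, 1)
        if run_count > best_count:
            best, best_count = run_char, run_count
    return best

def most_common_letter_counted(sentence):
    """Approach 2: build a frequency map and take the maximum count."""
    counts = Counter(c for c in sentence.lower() if c.isalpha())
    return counts.most_common(1)[0][0] if counts else None

def most_common_exclusive(sentence_a, sentence_b):
    """Follow-up: most common letter in A that does not appear in B.
    Trivial if you already have a frequency map; awkward if you sorted."""
    counts = Counter(c for c in sentence_a.lower() if c.isalpha())
    excluded = {c for c in sentence_b.lower() if c.isalpha()}
    remaining = [(n, c) for c, n in counts.items() if c not in excluded]
    return max(remaining)[1] if remaining else None
```

Both initial solutions run in comparable time on a sentence-length input, which is why the approaches have roughly equal merit; the advantage only appears once the follow-up rewards one data structure over the other.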
Interviews should use questions that are conceptually linked, but not interdependent from a code standpoint.
Remote Interviewing Next Steps
Once you have defined relevant competencies, communicated them to your candidates, and measured them consistently, the next steps are removing ambiguity and training your interviewers to deliver the questions consistently — both of which I’ll tackle in future articles.
Until then, stay home, stay safe, and don’t ask unbalanced remote interview questions!