
  • Evaluating the Interview

    February 23rd, 2012
    ideas
    When interviewing programmers at work, we ask technical questions that require knowledge of algorithms and data structures, along with coding on a whiteboard. It's the best way we know to evaluate candidates, but how good is it, really? It favors confident people who can think quickly on their feet, are comfortable showing their work verbally, are strong in an expressive language, and have done similar interviews before. It doesn't test the ability to focus, conscientiousness, code-reading skills, or patience. It's definitely useful, but could we be doing something better?

    The real problem here is that we get minimal feedback on whether we're testing the right things in an interview. If we decide not to hire someone, we get no further indication of their quality, while if we do hire them it's not until months later that we can tell how good they really are. With a tiny number of training examples for "chose to hire" and none for "chose not to hire", how do we get good at picking the right people? We need more feedback on how well the interview process predicts which candidates will be good.

    Companies vary in their interview practices, and you'd expect the ones with the best methods to get better employees for their money. Those companies should have a competitive advantage, so a randomly selected successful company is probably doing better-than-average interviewing. A company's interviewing ability is only one of many contributors to its success, however, so this might not tell us much.

    Companies with good people probably do better, so instead of hiring you could look for successful companies and buy them for their employees. This makes much more sense with tiny companies (startups) because there's much less noise. This is really expensive, though, so how can we tell if it's worth it? Really, this is just another hiring method we'd like to compare.

    The simplest solution would be to start randomly hiring some of the people who fail the interview and then, several months later, see how well interview success correlates with performance. If you keep good notes you can check how well each aspect of the interview and of the candidate's background predicts performance. The problem is that if our interview process is already working well, we end up hiring a lot of bad programmers we otherwise wouldn't have. Sample size is also a problem.
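Once you had performance data for everyone you hired, including the randomly hired rejects, the per-aspect check could be as simple as a correlation for each interview dimension. A minimal sketch, with invented scores and made-up field names:

```python
# Hypothetical sketch: for each aspect scored in the interview, compute its
# correlation with a later performance rating. All names and numbers below
# are invented for illustration.
from statistics import mean, stdev

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# One row per person hired: interview aspect scores (1-5) and a
# six-month performance rating.
candidates = [
    {"whiteboard": 4, "algorithms": 5, "communication": 3, "performance": 4.2},
    {"whiteboard": 2, "algorithms": 3, "communication": 4, "performance": 3.9},
    {"whiteboard": 5, "algorithms": 4, "communication": 2, "performance": 3.1},
    {"whiteboard": 3, "algorithms": 2, "communication": 5, "performance": 4.5},
    {"whiteboard": 1, "algorithms": 3, "communication": 3, "performance": 3.4},
]

performance = [c["performance"] for c in candidates]
for aspect in ("whiteboard", "algorithms", "communication"):
    scores = [c[aspect] for c in candidates]
    print(f"{aspect}: r = {pearson(scores, performance):+.2f}")
```

With real data you would also want significance tests; the sample-size problem above means a handful of hires will give very noisy correlations.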

    A more complicated option: a pair of companies could cross-interview. Two companies with similar ideas of what makes a good employee could each interview a batch of the other company's people, then see how well their interview rankings predicted the other company's performance reviews. You would need to be careful, though, because this could lead to a lot of poaching. A big enough company might be able to do this internally.
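Since the cross-interview comparison is between two rankings rather than raw scores, a rank correlation is the natural measure. A minimal sketch, with invented numbers and no tie handling:

```python
# Hypothetical sketch of the cross-interview check: company A ranks company
# B's employees from its own interviews, then compares that ranking against
# B's internal performance reviews with Spearman rank correlation.
# All numbers are invented; ties are ignored for simplicity.

def rank(values):
    """Rank positions (1 = highest value), assuming no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    ranks = [0] * len(values)
    for position, i in enumerate(order, start=1):
        ranks[i] = position
    return ranks

def spearman(xs, ys):
    """Spearman correlation via the rank-difference formula (no ties)."""
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rank(xs), rank(ys)))
    return 1 - 6 * d2 / (n * (n * n - 1))

interview_scores = [88, 71, 95, 60, 78]       # A's interview scores for B's people
review_scores = [3.9, 3.2, 4.4, 3.5, 3.8]     # B's performance reviews

print(f"rank correlation: {spearman(interview_scores, review_scores):+.2f}")
```

A correlation near +1 would mean A's interview ordered B's people the same way B's reviews did; near 0 would suggest the interview isn't measuring what the reviews measure.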

    I'm not very happy with any of these ideas, but I do think something along these lines would be very valuable. When you can measure results and test improvements it's possible to move quickly to a much better way. [1] Reliably identifying the employees who would help your company the most would be huge.


    [1] We do this all the time with A/B testing.



