Thursday, June 15, 2017

    Learn a new word: Positive Predictive Value

Predictive analytics and the application of algorithms to help make 'people' decisions in organizations have been a subject of development and discussion for several years now. The most common applications of predictive tech in HR have been to assess and rank candidates for a given job opening, to estimate an individual employee's flight risk, and to identify employees who have high potential or are likely to become high performers.

These use cases, and the technologies that enable them, present HR and business leaders with powerful new tools and capabilities that can, if applied intelligently, provide a competitive edge through better matching, hiring, and deployment of talent.

But you probably know all this if you are reading an HR Tech blog, and perhaps you are already applying predictive HR tech in your organization today. There is another side of prediction algorithms, though, that perhaps you have not considered, and I admit I have not really either: namely, how often these predictive tools are wrong, and, somewhat related, how we want to guide these tools once we better understand the ways they can be wrong.

    All that takes us to today's Learn a new word - 'Positive Predictive Value (PPV)'

From our pals at Wikipedia:

    The positive and negative predictive values (PPV and NPV respectively) are the proportions of positive and negative results in statistics and diagnostic tests that are true positive and true negative results, respectively. The PPV and NPV describe the performance of a diagnostic test or other statistical measure. A high result can be interpreted as indicating the accuracy of such a statistic.
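The definitions are easier to see with numbers attached. Here is a small sketch, using made-up candidate screening counts purely for illustration:

```python
# Positive and negative predictive value from a hypothetical confusion matrix.
# All counts below are invented for illustration only.
tp = 80   # flagged as a 'good hire' and actually a good fit (true positive)
fp = 20   # flagged as a 'good hire' but not a good fit (false positive)
tn = 150  # screened out and truly not a good fit (true negative)
fn = 50   # screened out but actually a good fit (false negative)

# PPV: of everyone the algorithm flagged, what share were truly good fits?
ppv = tp / (tp + fp)

# NPV: of everyone the algorithm screened out, what share were truly not?
npv = tn / (tn + fn)

print(f"PPV = {ppv:.2f}")  # 0.80
print(f"NPV = {npv:.2f}")  # 0.75
```

In this invented example, 80% of the flagged candidates really were good fits, but a quarter of the people screened out were good fits you never saw.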

A good way to think about PPV and NPV is the example of an algorithm called COMPAS, which attempts to predict the likelihood that a convicted criminal will become a repeat offender, and which has been used in some instances by sentencing judges when considering how harshly or leniently to sentence a given offender.

The strength of a tool like COMPAS is that, when accurate, it can suggest to the judge a longer sentence for a convict who is highly likely to be a repeat offender, and perhaps more leniency for an offender the algorithm assesses as less likely to repeat their crimes once released.

But the opposite, of course, is also true. If COMPAS 'misses', and it sometimes does, it can lead judges to give longer sentences to the wrong offenders, and shorter sentences to offenders who go on to repeat their bad behaviors. And here is where PPV really comes into play.

That's because algorithms like the ones behind COMPAS, and perhaps the ones your new HR technology uses to 'predict' the best candidates for a job opening, tend to be wrong (when they are wrong) in one direction. Either they generate too many 'matches', i.e., they recommend too many candidates as likely 'good hires' for a role, including some who really are not good matches at all; or they produce too many false negatives, i.e., they screen out too many candidates, including some who would indeed have been good fits for the role and good hires.

Back to our Learn a new word: Positive Predictive Value. A high PPV for the candidate matching algorithm indicates that a high proportion of the positives, or matches, are indeed matches. In other words, there are not many 'bad matches', and you can in theory trust the algorithm to help guide your hiring decisions. But, and this can be a big but, tuning for a high PPV can often come at the cost of a low negative predictive value, or NPV.

The logic is fairly straightforward. If the algorithm is tuned to ensure that any positives are highly likely to truly be positives, then fewer positives will be generated, and more of the negatives (the candidates you never call or interview) may in fact have been actual positives, or good candidates, after all.
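This tradeoff can be sketched with a toy simulation. The scores and outcomes below are entirely invented; the point is only to show that raising the matching cutoff pushes PPV up while pulling NPV down:

```python
# Hypothetical (score, truly_good_fit) pairs for a dozen candidates.
# Everything here is made up for illustration only.
candidates = [
    (0.95, True), (0.90, True), (0.85, False), (0.80, True),
    (0.70, True), (0.65, False), (0.60, True), (0.50, False),
    (0.40, True), (0.30, False), (0.20, False), (0.10, False),
]

def ppv_npv(threshold):
    """Return (PPV, NPV) when candidates at or above threshold are 'matches'."""
    tp = sum(1 for s, good in candidates if s >= threshold and good)
    fp = sum(1 for s, good in candidates if s >= threshold and not good)
    tn = sum(1 for s, good in candidates if s < threshold and not good)
    fn = sum(1 for s, good in candidates if s < threshold and good)
    ppv = tp / (tp + fp) if tp + fp else 0.0
    npv = tn / (tn + fn) if tn + fn else 0.0
    return ppv, npv

# A lenient cutoff screens more candidates in; a strict one screens more out.
for threshold in (0.5, 0.8):
    ppv, npv = ppv_npv(threshold)
    print(f"threshold {threshold}: PPV={ppv:.2f}, NPV={npv:.2f}")
```

With these made-up numbers, moving the cutoff from 0.5 to 0.8 raises PPV (the matches you see are cleaner) but lowers NPV (more of the people you screened out were actually good candidates).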

Whether it is a predictive tool a judge may use when sentencing, or one your hiring managers may use when deciding who to interview, it is important to keep this balance between false positives and false negatives in mind.

Think of it this way: would you rather have a few more candidates than you need get screened 'in' to the process, or have a few who should be 'in' get screened 'out' because you want the PPV to be as high as possible?

There are good arguments for both sides, I think, but the more important point is that we think about the problem in the first place. And that we push any provider of predictive HR technology to explain their approach to PPV and NPV when they design their systems.
