
Tuesday
Sep 12, 2017

    For anyone building or implementing AI for HR or hiring

    You can't swing a hammer anywhere these days without hitting an 'AI in HR' article, prediction, webinar, talk, or HR conference session. Heck, we will have a fair bit of AI in HR talk at the upcoming HR Technology Conference in October.

But one of the important elements that the AI in HR pieces usually fail to address adequately, if at all, is the potential for inherent bias, unfairness, or even worse to find their way into the algorithms that will seep into HR and hiring decisions more and more. After all, this AI and these algorithms aren't (yet) able to construct themselves. They are all being developed by people, and as such are certainly subject, potentially, to those people's own human imperfections. Said differently, what mechanism exists to protect the users, and the people the AI impacts, from the biases, unconscious or otherwise, of the creators?

I thought about this while reading an excellent essay on the Savage Minds anthropology blog, written by Sally Applin, titled Artificial Intelligence: Making AI in Our Images.

A quick excerpt from the piece (but you really should read the entire thing):

    Automation currently employs constructed and estimated logic via algorithms to offer choices to people in a computerized context. At the present, the choices on offer within these systems are constrained to the logic of the person or persons programming these algorithms and developing that AI logic. These programs are created both by people of a specific gender for the most part (males), in particular kinds of industries and research groups (computer and technology), in specific geographic locales (Silicon Valley and other tech centers), and contain within them particular “baked-in” biases and assumptions based on the limitations and viewpoints of those creating them. As such, out of the gate, these efforts do not represent society writ large nor the individuals within them in any global context. This is worrying. We are already seeing examples of these processes not taking into consideration children, women, minorities, and older workers in terms of even basic hiring talent to create AI. As such, how can these algorithms, this AI, at the most basic level, be representative for any type of population other than its own creators?

A really challenging and provocative point of view on the dangers of AI being (seemingly) created by mostly male, mostly Silicon Valley types, with mostly the same kinds of backgrounds.

At a minimum, for folks working on or thinking of implementing AI solutions in the HR space that will drive important, life-impacting decisions like who should get hired for a job, we owe it to those who are going to be affected by these AIs to ask a few basic questions.

Like, is the team developing the AI gender balanced and representative of a wide range of perspectives, backgrounds, nationalities, and races?

Or what internal QA mechanisms have been put into place to prevent the kinds of human biases that Applin describes from seeping into the AI's own 'thought' processes? (One simple version of such a check is sketched after these questions.)

And finally, does the AI take into account differences in cultures, societies, and national or local identities that we humans seem to be able to grasp pretty easily, but an AI can have a difficult time comprehending?
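On that QA question, here is a minimal sketch of what one such mechanism could look like in practice: an adverse-impact check in the spirit of the EEOC's "four-fifths rule," run against a hiring model's recommendations. To be clear, the group labels, the data shape, and the 0.8 threshold below are illustrative assumptions for this sketch, not any vendor's actual API or a complete fairness audit.

```python
# Hypothetical sketch: flag groups whose selection rate falls below
# 80% of the highest group's rate (the "four-fifths rule" heuristic).
from collections import defaultdict

def selection_rates(candidates):
    """candidates: list of (group, was_recommended) tuples."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, recommended in candidates:
        totals[group] += 1
        if recommended:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(candidates, threshold=0.8):
    """Return each group's rate and whether it clears the threshold
    relative to the best-performing group."""
    rates = selection_rates(candidates)
    top = max(rates.values())
    return {g: (rate, rate >= threshold * top) for g, rate in rates.items()}

# Made-up example data: group A is recommended 2 of 3 times, group B 1 of 3.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(four_fifths_check(sample))
# A: rate ~0.67, passes; B: rate ~0.33, flagged (0.33 < 0.8 * 0.67)
```

A check like this is cheap to run on every batch of model output, which is exactly the kind of ongoing QA the questions above are asking vendors and implementers to commit to.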

    Again, I encourage anyone at any level interested in AI in HR to think about these questions and more as we continue to chase more and 'better' ways to make the organization's decisions and interactions with people more accurate, efficient, effective - and let's hope - more equitable.

Tuesday
May 09, 2017

    Never gets tired, never stops learning

    Sharing another dispatch from the 'robots are coming to take all our jobs away' world with this recent piece from Digiday, "Who needs media planners when a tireless robot named Albert can do the job?".

The back story of this particular implementation of AI to replace (or as we will learn, perhaps just augment or supplement) human labor comes from advertising, where the relatively new concept of programmatic digital advertising has emerged in the last few years. Part of the process of getting things like banner ads, Facebook ads, display ads, and even branded video ads in front of consumers involves marketers choosing the type of ads to show, the content of those ads, the days/times to show the ads, and finally the platforms on which to run the ads.

    If it all sounds pretty complex to you, then you're right.

    Enter "Albert." As per the Digiday piece once the advertiser, (in this case Dole Foods), set some blanket objectives and goals, then Albert determined what media to invest in at what times and in what formats. And it also decided where to spend the brand’s budget. On a real-time basis, it was able to figure out the right combinations for creative and headlines.  For example, once Albert determined that Dole’s user engagement rate on Facebook was 40 percent higher for mobile than desktop, Albert shifted more budget to mobile.

The results have been impressive: according to Dole, the brand saw an 87 percent increase in sales versus the prior year.

    Why bring this up here, on a quasi-HR blog?

Because it highlights, really clearly, a real-life example of the conditions of work that are most ripe for automation (or at least augmentation). Namely: a detailed, data-intensive, high-volume environment that has to be analyzed; a fast-moving, rapidly changing set of conditions that need to be reacted to in real time (and 24/7); and finally, the need to constantly assess outcomes and compare choices in order to adjust strategies and execution plans to optimize for the desired outcomes.

People are good at those things. But an AI like Albert might be (probably is) better at those things.

    But in the piece we also see the needed and hard-to-automate contributions of the marketing people at Dole as well.

    They have to give Albert the direction and set the desired business goals - sales, clicks, 'likes', etc.

They have to develop the various creative content and options from which Albert will eventually choose what to run.

    And finally, they have to know if Albert's recommendations actually do make sense and 'fit' with the overall brand message and strategy.

Let's recap. People: set goals and strategic objectives, develop creative content, and "understand" the company, brand, context, and environment. AI: executes at scale, assesses results in real time, optimizes actions to meet the stated goals, and provides visibility into the actions it is taking.

It sounds like a really reasonable and pretty effective implementation of AI in a real business context.

    And an optimistic one too, as the 'jobs' that Albert leaves for the people to do seem like the ones that people will want to do.