    Entries in AI (11)

    Thursday
    Mar 08 2018

    CHART OF THE DAY: The Rise of the Smart Speaker

    There is pretty good evidence that mainstream adoption of new technologies happens significantly faster than it did in the past. It took something like 60 or 70 years for the home-based, landline telephone to reach over 90% penetration in US homes once the technology became generally available.

    Fast forward to more recent technology innovations like the personal computer or the mobile phone, and the time to widespread adoption has shrunk to just a couple of decades (if not less for modern tools and solutions like social media/networking apps).

    New tech, when it 'hits', hits much faster than ever before, and its adoption accelerates across mainstream users much faster as well. Today's Chart(s) of the Day, courtesy of research done by Voicebot.ai, show just how prevalent the smart speaker, a technology almost no one had in their homes even two years ago, has become.

    Chart 1 - Smart Speaker Market Penetration - US

    About 20% of US adults are in homes that have one of these smart speakers enabled. It may not sound like much, but think about it - how many of these had you even seen, say, as recently as 2016?

    Chart 2 - Smart Speaker Market Share - US

    No surprise, to me at least, that Amazon has the dominant position in the US in terms of smart speakers. They beat their competitors to this market, and their platform, Alexa, has become pretty much synonymous with voice assistant technology as a whole. If I were a company looking to develop solutions for voice, I would start with Alexa for sure.

    Once people, in their 'real lives', begin to adopt a technology solution in large numbers, they begin to seek, demand, and expect that these same kinds of technologies will be available and tailored to their workplace needs as well. The data shows that smart speakers like the Echo and the Google Home device are gaining mainstream adoption really, really quickly.

    If your organization has not yet started to think about how to deploy services, content, and access to organizational information via these smart speakers and platforms like Alexa, I wouldn't say you are late, but you are getting close to being late.
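    Getting started is less daunting than it might sound. Here is a minimal sketch of what an employee-facing Alexa skill could look like, using the ASK SDK for Python; the intent name and the get_vacation_balance helper are hypothetical stand-ins for whatever your HR system actually exposes, not a real integration.

```python
# Minimal Alexa skill sketch (illustrative only) using the ASK SDK for Python.
# "VacationBalanceIntent" and get_vacation_balance() are hypothetical stand-ins
# for whatever your HR system actually exposes.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.utils import is_request_type, is_intent_name

sb = SkillBuilder()


def get_vacation_balance(user_id):
    # Placeholder: a real skill would call your HR system's API here.
    return 12


@sb.request_handler(can_handle_func=is_request_type("LaunchRequest"))
def launch_handler(handler_input):
    speech = "Welcome to the HR assistant. You can ask about your vacation balance."
    return handler_input.response_builder.speak(speech).ask(speech).response


@sb.request_handler(can_handle_func=is_intent_name("VacationBalanceIntent"))
def vacation_balance_handler(handler_input):
    user_id = handler_input.request_envelope.session.user.user_id
    days = get_vacation_balance(user_id)
    return handler_input.response_builder.speak(
        f"You have {days} vacation days remaining."
    ).response


# AWS Lambda entry point for the skill.
handler = sb.lambda_handler()
```

    The skill itself is just a thin voice layer; the real work is deciding which pieces of organizational information are actually worth exposing this way.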

    Better to be out in front of a freight train rolling down the line than to get run over by it.

    Last note - stay tuned for an exciting announcement in this space from your pals at the HR Happy Hour Show.

    Tuesday
    Feb 06 2018

    Automated narratives

    We are soon going to reach 'Peak Artificial Intelligence', I think, if we haven't already.

    There have been a million examples of 'AI will replace XYZ' or 'AI for insert-your-favorite-process-here' pieces and developments in the last couple of years, and if you and your organization are not at least thinking about incorporating AI into your business processes, well, the conventional thinking goes, you are going to be left behind. I suppose time will tell on that. I think the adage (was it from Bill Gates?) that we tend to overestimate the impact of new technology in the short term and underestimate its impact in the long term probably applies to AI as well. AI is definitely coming to a business process near you; it is just a little unclear how long it will take and how much impact it will have on your organization, people, and business.

    But one fairly common theme in all the talk about AI (and automation more generally) is that it will affect, and potentially replace, more mundane, repetitive, rules-heavy, and precisely defined processes and roles (at least initially), while leaving creative, nuanced, complex, and more sophisticated processes and roles to the humans (at least for now). Robots are going to take the warehouse jobs and maybe some/most of the cashier jobs, but 'creative' types like marketers and advertising folks, for example, would be largely safe from automation. While Watson can win at Jeopardy! and Google can build a machine to win at Go, no AI could come up with, say, one of the amazing ads we just saw during the Super Bowl. Right?

    But wait...

    Check out this excerpt from a piece on Ad Week - 'Coca-Cola Wants to Use AI Bots to Create Its Ads'

    Coca-Cola is one of the most beloved brands in the world and is known for creating some of the best work in the advertising industry. But can an AI bot replace a creative? Mariano Bosaz, the brand’s global senior digital director, wants to find out.

    “Content creation is something that we have been doing for a very long time—we brief creative agencies and then they come up with stories that they audio visualize and then we have 30 seconds or maybe longer,” Bosaz said. “In content, what I want to start experimenting with is automated narratives.”

    In theory, Bosaz thinks AI could be used by his team for everything from creating music for ads, writing scripts, posting a spot on social media and buying media. “That’s a long-term vision,” he said. “I don’t know if we can do it 100 percent with robots yet—maybe one day—but bots is the first expression of where that is going.”

    It is one thing when a manufacturing executive states that he or she wants to automate some or most aspects of a manufacturing or assembly process and reduce levels of human employment in favor of technology - we are coming to expect that robots and tech and AI are simply inevitably going to do those jobs in the future.

    But it is kind of a different thing entirely to hear a 'creative' executive from one of the world's largest companies and most recognized brands openly discuss how technology like AI can and probably will begin to take over some or even most parts of a highly creative, expressive process like developing advertising content. We don't, or at least I don't, like to think of these kinds of tasks and jobs as ones that could also fall into the category of 'we are better off having a robot do that'. I mean, (trying) to be creative is mostly how I make a living. Emphasize the 'trying' part.

    'Automated narratives', for some reason that term stuck out for me when I read the Ad Week piece. Hmm. Probably need to think about that a little longer.

    But while I am pondering, I will end with the disclaimer that this post, (and so far, all the posts on this blog), was 100% produced by a person. Although some days I wish I had access to a blog-writing 'bot.

    Have a great day!

    Wednesday
    Nov 15 2017

    Self-driving bus crashes, proving all buses should be self-driving

    In case you missed it, a fairly significant pilot of self-driving vehicles, in this case shuttle buses, launched last week in Las Vegas. In this test, shuttle buses developed by the French company Navya will carry passengers along a half-mile route in downtown Las Vegas, (the part of Vegas that most of us who go there for conferences and conventions tend to ignore). The Navya ARMA buses rely on GPS, cameras, and light-detecting sensors to navigate public streets. According to reports, the year-long test hopes to shuttle about 250,000 passengers up and down the Vegas streets.

    Pretty cool, right?

    Guess what happened in the first couple of hours after launching the self-driving pilot program?

    Yep, a CRASH.

    The first self-driving bus was in a minor accident within a couple of hours of the service's launch, when a (human-driven) delivery truck failed to stop in time and collided with the stationary shuttle bus.

    According to a spokesperson from the American Automobile Association, "The truck making the delivery backed into the shuttle which was stopped. Human error causes most traffic collisions, and this was no different."

    No one was hurt, the damage was minor, and the self-driving pilot program continues in Las Vegas.

    Why bring this up, especially on a blog that at least pretends to be about work, HR, HR Tech, etc.?

    Because these kinds of technology developments - self-driving vehicles, robots that can sort and organize inventory in warehouses, robots that greet customers and provide basic service in retail environments and hotels - are being developed, improved, and deployed at increasing rates and in more and more contexts.

    Self-driving technology in particular, especially for commercial vehicles, is by some estimates within 10 years of becoming a mainstream technology, potentially displacing hundreds of thousands of commercial truck drivers. And as an aside, this piece describes how the trucking industry is clearly not ready for this and other technological disruptions.

    This is not meant to be another, tired, 'Robots are taking our jobs' post, but rather another reminder that technology-driven disruption will continue to change the nature of work, workplaces, and even our own ideas about the role of people in work and the economy. And HR and HR tech leaders have to take a leading role in how, where, when, and why their organizations navigate these changes, as they sit directly at the intersection of people, technology, and work.

    And lastly, if that Las Vegas delivery truck had been equipped with the same kind of self-driving tech that the Navya ARMA bus has, there is almost no chance there would have been an accident.

    But it might have been fun if it had happened anyway. I'd love to see two 'robot' trucks argue with each other on the side of the road about which one was the doofus who caused the accident.

    Have a great day!

    Wednesday
    Nov 08 2017

    Looking for bias in black-box AI models

    What do you do when you can't sleep?

    Sometimes I watch replays of NBA games, (how about my Knicks?), and sometimes I read papers and articles that I had been meaning to get to, but for one reason or another hadn't made the time.

    That is how I spent an hour or so with 'Detecting Bias in Black-Box Models Using Transparent Model Distillation', a recently published paper by researchers at Cornell, Microsoft, and Airbnb. I know, not exactly 'light' reading.

    Full disclosure: I don't profess to have understood all the details and complexity of the study and its research methods. But the basic premise of the research, and the problem the researchers are trying to solve, is one I do understand, and one that you should understand too as you think about incorporating AI technologies into workplace processes and decision support/making.

    Namely, that AI technology can only be as good and as accurate as the data it’s trained on, and in many cases we end up incorporating our human biases into algorithms that have the potential to make a huge impact on people’s lives - like decisions about whom to hire and promote and reward.

    In the paper, the researchers created models that mimic the ones used by companies that produce 'risk scores' - the kinds of scores used by a bank to decide whether or not to give someone a loan, or by a judicial administration to decide whether or not to give someone early parole. This first set of models is similar to the ones that these companies use themselves.

    Then the researchers created a second, transparent model that is trained on the actual outcomes the first set of models is designed to predict - whether or not the loans were paid back, and whether or not the parolee committed another crime. Importantly, these models did include data points that most of us, especially in HR, are trained to ignore - things like gender, race, and age. The researchers did this intentionally, and rather than me trying to explain why that is important, read through this section of the paper where they discuss the need to assess these kinds of 'off-limits' data elements, (emphasis mine):

    Sometimes we are interested in detecting bias on variables that have intentionally been excluded from the black-box model. For example, a model trained for recidivism prediction or credit scoring is probably not allowed to use race as an input to prevent the model from learning to be racially biased. Unfortunately, excluding a variable like race from the inputs does not prevent the model from learning to be biased. Racial bias in a data set is likely to be in the outcomes — the targets used for learning; removing the race input variable does not remove the bias from the targets. If race was uncorrelated with all other variables (and combinations of variables) provided to the model as inputs, then removing the race variable would prevent the model from learning to be biased because it would not have any input variables on which to model the bias. Unfortunately, in any large, real-world data set, there is massive correlation among the high-dimensional input variables, and a model trained to predict recidivism or credit risk will learn to be biased from the correlation between other input variables that must remain in the model (e.g., income, education, employment) and the excluded race variable because these other correlated variables enable the model to more accurately predict the (biased) outcome, recidivism or credit risk. Unfortunately, removing a variable like race or gender does not prevent a model from learning to be biased. Instead, removing protected variables like race or gender makes it harder to detect how the model is biased because the bias is now spread in a complex way among all of the correlated variables, and also makes correcting the bias more difficult because the bias is now spread in a complex way through the model instead of being localized to the protected race or gender variables. The main benefit of removing a protected variable like race or gender from the input of a machine learning model is that it allows the group deploying the model to claim (incorrectly) that the model is not biased because it did not use the protected variable.

    This is really interesting, if counter-intuitive to how most of us, (me for sure), would think about ensuring that the AI and algorithms we deploy are unbiased when they evaluate data sets for a process meant to provide decision support for the 'Who should we interview for our software engineer opening?' question.

    I'm sure we've all seen or heard about 'AI for HR' solutions that profess to eliminate biases around gender, race, and even age from important HR processes by 'hiding' or removing the indicators of protected and/or under-represented groups.

    This study suggests that removing those indicators from the process and the design of the AI is exactly the wrong approach - and that with large data sets, the AI itself can and will 'learn' to be biased anyway.
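    To make both halves of that concrete - the bias leaking in through correlated variables, and the transparent-model idea for detecting it - here is a small synthetic sketch. This is purely my own illustration on made-up data, not the researchers' models or code.

```python
# Synthetic illustration only (made-up data, not the paper's models):
# 1) a model trained WITHOUT the protected attribute still learns bias
#    through a correlated proxy, and
# 2) a transparent model trained on the real outcomes WITH the attribute
#    included makes that bias visible and easy to localize.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)          # protected attribute (e.g. race/gender/age)
proxy = group + rng.normal(0, 0.5, n)  # correlated stand-in, e.g. zip code
income = rng.normal(50, 10, n)         # legitimate predictor

# Historical outcomes carry a penalty against group 1 -- the bias lives in the targets.
score = 0.05 * income - 1.0 * group + rng.normal(0, 1, n)
outcome = (score > np.median(score)).astype(int)

# (1) Train with the protected attribute excluded: only income and the proxy.
blind_X = np.column_stack([income, proxy])
blind = LogisticRegression().fit(blind_X, outcome)
preds = blind.predict(blind_X)
for g in (0, 1):
    print(f"group {g}: predicted positive rate = {preds[group == g].mean():.2f}")

# (2) Audit with a transparent model that deliberately includes the attribute.
audit_X = np.column_stack([income, proxy, group])
audit = LogisticRegression().fit(audit_X, outcome)
for name, coef in zip(["income", "proxy", "group"], audit.coef_[0]):
    print(f"{name:>6}: coefficient = {coef:+.2f}")

# The gap in predicted rates shows the 'blind' model learned the bias anyway;
# the large negative weight on 'group' in the audit model shows where it lives.
```

    That, in a few lines, is the counter-intuitive point: hiding the protected variable doesn't remove the bias, it just makes the bias harder to see.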

    Really powerful and interesting stuff for sure.

    As I said, I don't profess to get all the details of this research, but I do know this: if I were evaluating an AI for HR tool for something like hiring decision support, I would probably ask these questions of a potential provider:

    1. Do you include indicators of a candidate's race, gender, age, etc. in the AI/algorithms that you apply in order to produce your recommendations?

    If their answer is 'No, we don't include those indicators', then:

    2. Then, are you sure that your AI/algorithms aren't learning how to figure them out anyway, i.e., are still potentially biased against under-represented or protected groups?

    Important questions to ask, I think.

    Back to the study, (in case you don't slog all the way through it). The researchers concluded that for both of the large AI tools they examined, (loan approvals and parole approvals), the existing models still exhibited biases that their creators professed to have 'engineered' away. And chances are, had the researchers trained their sights on one of the HR processes where AI is being deployed, they would have found the same thing.

    Have a great day!

    Tuesday
    Sep 12 2017

    For anyone building or implementing AI for HR or hiring

    You can't swing a hammer anywhere these days without hitting an 'AI in HR' article, prediction, webinar, talk, or HR conference session. Heck, we will have a fair bit of AI in HR talk at the upcoming HR Technology Conference in October.

    But one of the important elements that the 'AI in HR' pieces usually fail to address adequately, if at all, is the potential for inherent bias, unfairness, or worse to find its way into the algorithms that will seep into HR and hiring decisions more and more. After all, this AI and these algorithms aren't (yet) able to construct themselves. They are all being developed by people and, as such, are certainly subject, potentially, to those people's own human imperfections. Said differently, what mechanism exists to protect the users, and the people the AI impacts, from the biases, unconscious or otherwise, of the creators?

    I thought about this while reading an excellent essay on the Savage Minds anthropology blog, written by Sally Applin, titled Artificial Intelligence: Making AI in Our Images.

    A quick excerpt from the piece, (but you really should read the entire thing):

    Automation currently employs constructed and estimated logic via algorithms to offer choices to people in a computerized context. At the present, the choices on offer within these systems are constrained to the logic of the person or persons programming these algorithms and developing that AI logic. These programs are created both by people of a specific gender for the most part (males), in particular kinds of industries and research groups (computer and technology), in specific geographic locales (Silicon Valley and other tech centers), and contain within them particular “baked-in” biases and assumptions based on the limitations and viewpoints of those creating them. As such, out of the gate, these efforts do not represent society writ large nor the individuals within them in any global context. This is worrying. We are already seeing examples of these processes not taking into consideration children, women, minorities, and older workers in terms of even basic hiring talent to create AI. As such, how can these algorithms, this AI, at the most basic level, be representative for any type of population other than its own creators?

    A really challenging and provocative point of view on the dangers of AI being (seemingly) created by mostly male, mostly Silicon Valley types with mostly the same kinds of backgrounds.

    At a minimum, for folks working on and thinking of implementing AI solutions in the HR space that will drive incredibly important, life-impacting decisions like who should get hired for a job, we owe it to those who are going to be affected by these AIs to ask a few basic questions.

    Like, is the team developing the AI representative of a wide range of perspectives, backgrounds, nationalities, and races, and is it gender balanced?

    Or, what internal QA mechanisms have been put in place to keep the kinds of human biases that Applin describes from seeping into the AI's own 'thought' processes?

    And finally, does the AI take into account differences in cultures, societies, and national or local identities that we humans seem to be able to grasp pretty easily, but that an AI can have a difficult time comprehending?

    Again, I encourage anyone at any level interested in AI in HR to think about these questions and more as we continue to chase more and 'better' ways to make the organization's decisions and interactions with people more accurate, efficient, and effective - and, let's hope, more equitable.