    Entries in algorithms (2)

Thursday
Sep 15, 2016

    Maybe automation will hit managers as hard as staff

    Super (long) read from over the weekend on the FT.com site titled 'When Your Boss is an Algorithm' that takes a really deep and thoughtful look at the challenges, pain, and potential of automation and algorithms in work and workplaces.

While the piece hits many familiar themes from the ongoing discussion and debate about the costs and benefits of increased automation for front-line workers (Uber and the like largely controlling their workers while still insisting they are independent contractors, the downward pressure on wages that can arise from increased scheduling efficiency, and how the 'gig economy', just like every other economy before it, seems to create both winners and losers), there was one really interesting passage about how a particular form of algorithm might impact managers as much as, if not more than, workers.

    Here's the excerpt of interest from the FT.com piece, then some comments from me after the quote:

    The next frontier for algorithmic management is the traditional service sector, tackling retailers and restaurants.

    Percolata is one of the Silicon Valley companies trying to make this happen. The technology business has about 40 retail chains as clients, including Uniqlo and 7-Eleven. It installs sensors in shops that measure the volume and type of customers flowing in and out, combines that with data on the amount of sales per employee, and calculates what it describes as the “true productivity” of a shop worker: a measure it calls “shopper yield”, or sales divided by traffic.

    Percolata provides management with a list of employees ranked from lowest to highest by shopper yield. Its algorithm builds profiles on each employee — when do they perform well? When do they perform badly? It learns whether some people do better when paired with certain colleagues, and worse when paired with others. It uses weather, online traffic and other signals to forecast customer footfall in advance. Then it creates a schedule with the optimal mix of workers to maximise sales for every 15-minute slot of the day. Managers press a button and the schedule publishes to employees’ personal smartphones. People with the highest shopper yields are usually given more hours. Some store managers print out the leaderboard and post it in the break room. “It creates this competitive spirit — if I want more hours, I need to step it up a bit,” explains Greg Tanaka, Percolata’s 42-year-old founder.

    The company runs “twin study” tests where it takes two very similar stores and only implements the system in one of them. The data so far suggest the algorithm can boost sales by 10-30 per cent, Tanaka says. “What’s ironic is we’re not automating the sales associates’ jobs per se, but we’re automating the manager’s job, and [our algorithm] can actually do it better than them.”
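
A quick aside on the mechanics before the commentary: the 'shopper yield' measure described in the excerpt is just sales divided by shopper traffic, and the ranked list managers receive is a sort on that ratio. Here is a minimal sketch of that computation (all names and numbers are hypothetical, not Percolata's actual system):

```python
# Toy illustration of the "shopper yield" metric described in the
# FT excerpt: sales attributed to an employee divided by the shopper
# traffic counted during that employee's shifts.
# All names and numbers here are hypothetical.

def shopper_yield(sales: float, traffic: int) -> float:
    """Sales per shopper who walked in while the employee was on shift."""
    return sales / traffic if traffic else 0.0

# (employee, sales during their shifts, shoppers counted during their shifts)
shifts = [
    ("Avery", 4200.0, 310),
    ("Blake", 3900.0, 350),
    ("Casey", 5100.0, 360),
]

ranked = sorted(
    ((name, shopper_yield(sales, traffic)) for name, sales, traffic in shifts),
    key=lambda pair: pair[1],
)

# Lowest to highest, the order the FT piece says managers receive.
for name, y in ranked:
    print(f"{name}: {y:.2f} per shopper")
```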

That last sentence of the excerpt is the key bit, I think.

If the combination of sensor data, sales data, and scheduling and employee information, when passed through the software's algorithm, can produce a staffing and scheduling plan that is 10% to 30% better (in terms of sales) than what even an experienced manager can conjure on his or her own, then the argument to replace at least some 'management' with said algorithm is quite compelling. It is also a notable outlier among these kinds of 'automation is taking our jobs' stories, which usually focus on the people holding the jobs that 'seem' more easily automated: the ones that are repetitive, involve low levels of decision making, and require skills that even simple technology can master.

Crafting the 'optimal' schedule for a retail location seems to require plenty of managerial skill and understanding of the business and its goals, plus at least a decent understanding of the personalities, needs, wants, and foibles of the actual people whose names are being written on the schedule.

It seems like algorithms from companies like Percolata are making significant advances on the first set of criteria: predicting traffic, estimating yield, and devising the 'best' staffing plan (at least on paper). My suspicion is that the algorithm is not quite ready to really, deeply understand the latter set of issues, the ones that are, you know, more 'human' in nature.
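
For a concrete (and entirely hypothetical) picture of that first set of criteria, here is a sketch of the kind of naive greedy assignment such a system might make once it has a footfall forecast and per-employee yield estimates; the real thing would also juggle availability, pairing effects, hour caps, and labor rules:

```python
# Hypothetical sketch of drafting a schedule from a footfall forecast
# and per-employee yield estimates. Deliberately naive: a production
# optimizer would handle availability, hour limits, and pairing effects.

from typing import Dict, List

def draft_schedule(
    forecast: Dict[str, int],    # 15-minute slot -> expected shoppers
    yields: Dict[str, float],    # employee -> estimated shopper yield
    staff_per_slot: int,
) -> Dict[str, List[str]]:
    """Assign the highest-yield employees to the busiest slots first."""
    by_yield = sorted(yields, key=yields.get, reverse=True)
    schedule: Dict[str, List[str]] = {}
    for slot in sorted(forecast, key=forecast.get, reverse=True):
        # Naive: the same top performers cover every slot; note how the
        # highest-yield people naturally accumulate the most hours.
        schedule[slot] = by_yield[:staff_per_slot]
    return schedule

print(draft_schedule(
    forecast={"12:00": 80, "12:15": 95, "12:30": 60},
    yields={"Avery": 13.5, "Blake": 11.1, "Casey": 14.2},
    staff_per_slot=2,
))
```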

Or, said differently, it is unlikely the algorithm will be able to predict a drop in productivity due to issues an employee may be having outside of work, or to adequately weigh how important it is to a good employee to have the schedule work around a second job or other responsibilities.

There is probably a long way to go before algorithms can completely take over these kinds of management tasks, you know, the ones where actually talking to people is needed to reach solutions.

    But when/if all the workers are automated away themselves? Well, then that is a different story entirely. 

Tuesday
Feb 24, 2015

    On trusting algorithms, even when they make mistakes

Some really interesting research from the University of Pennsylvania on our tendency to lose faith and trust in data-forecasting algorithms (or, more generally, in advanced forms of smart automation) more quickly than we lose faith in other humans' capabilities (and our own), after observing even small errors from the algorithm, and even when seeing evidence that, relative to human forecasters, the algorithms are still superior.

From the abstract of Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err:

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet, when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

Let's unpack that some. In the research conducted at Penn, the authors showed that even when given evidence of a statistical algorithm's overall superior performance at predicting a specific outcome (in the paper, both the humans and the algorithm attempted to predict the likelihood of success of MBA program applicants), most people lost faith and trust in the algorithm and reverted to their own, inferior predictive abilities. And in the study, the participants were incentivized to pick the 'best' method of prediction: they were rewarded with a monetary bonus for making the right choice.

But still, and consistently, the human participants more quickly lost faith and trust in the algorithm, even when logic suggested they should have selected it over their own (and other people's) predictive abilities.
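
To see why a better forecaster can still look untrustworthy, here is a toy simulation of my own (not the paper's data or method): an 'algorithm' with roughly half the human's average error still misses badly often enough to give an observer plenty of chances to lose confidence in it.

```python
# Toy simulation (not the Penn study's data): both forecasters guess a
# score; the algorithm has tighter errors but still errs visibly.

import random

random.seed(42)

def run(trials: int = 1000) -> None:
    algo_err = human_err = 0.0
    algo_visible_mistakes = 0
    for _ in range(trials):
        truth = random.gauss(50, 10)
        algo = truth + random.gauss(0, 5)    # tighter error distribution
        human = truth + random.gauss(0, 9)   # looser error distribution
        algo_err += abs(algo - truth)
        human_err += abs(human - truth)
        if abs(algo - truth) > 10:           # a miss an observer would notice
            algo_visible_mistakes += 1
    print(f"mean |error|, algorithm: {algo_err / trials:.1f}")
    print(f"mean |error|, human:     {human_err / trials:.1f}")
    print(f"algorithm misses by >10 points: {algo_visible_mistakes} of {trials}")

run()
```

The algorithm wins on average in every run, but those occasional big misses are the kind of thing the study's participants saw, and what sent them back to the inferior human forecaster.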

    Why is this a problem, this algorithm aversion?

    Because while algorithms are proving to be superior at prediction across a wide range of use cases and domains, people can be slow to adopt them. Essentially, whenever prediction errors are likely—as they are in virtually all forecasting tasks—people will be biased against algorithms, because people are more likely to abandon an algorithm than a human judge for making the same mistake.

    What might this mean for you in HR/Talent?

    As more HR and related processes, functions, and decisions become 'data-driven', it is likely that sometimes, the algorithms we adopt to help make decisions will make mistakes. 

That 'pre-hire' assessment tool will tell you to hire someone who doesn't actually end up being a good employee.

The 'flight risk' formula will fail to flag an important executive as a risk before they suddenly quit and head to a competitor.

The statistical model will tell you to raise wages for a subset of workers, but after you do, you won't see a corresponding rise in output.

    That kind of thing. And once these 'errors' become known, you and your leaders will likely want to stop trusting the data and the algorithms.

What the Penn researchers are saying is that we have much less tolerance for the algorithm's mistakes than we have for our own. And maintaining that attitude in a world where the algorithms are only getting better is, well, a mistake in itself.

The study is here, and it is pretty interesting; I recommend it if you are interested in making your organization more data-driven.

    Happy Tuesday.