Entries in automation (28)

Thursday
Sep 15, 2016

Maybe automation will hit managers as hard as staff

A super (long) read from over the weekend on the FT.com site, titled 'When Your Boss is an Algorithm', takes a really deep and thoughtful look at the challenges, pain, and potential of automation and algorithms in work and workplaces.

While the piece hits many familiar themes from the ongoing discussion and debate about the costs and benefits of increased automation for front-line workers (Uber and the like largely controlling their workers while still insisting they are independent contractors, the downward pressure on wages that arises from increased scheduling efficiency, and how the 'gig economy', just like every other economy before it, seems to create both winners and losers), there was one really interesting passage about how a particular form of algorithm might impact managers as much as, if not more than, workers.

Here's the excerpt of interest from the FT.com piece, then some comments from me after the quote:

The next frontier for algorithmic management is the traditional service sector, tackling retailers and restaurants.

Percolata is one of the Silicon Valley companies trying to make this happen. The technology business has about 40 retail chains as clients, including Uniqlo and 7-Eleven. It installs sensors in shops that measure the volume and type of customers flowing in and out, combines that with data on the amount of sales per employee, and calculates what it describes as the “true productivity” of a shop worker: a measure it calls “shopper yield”, or sales divided by traffic.

Percolata provides management with a list of employees ranked from lowest to highest by shopper yield. Its algorithm builds profiles on each employee — when do they perform well? When do they perform badly? It learns whether some people do better when paired with certain colleagues, and worse when paired with others. It uses weather, online traffic and other signals to forecast customer footfall in advance. Then it creates a schedule with the optimal mix of workers to maximise sales for every 15-minute slot of the day. Managers press a button and the schedule publishes to employees’ personal smartphones. People with the highest shopper yields are usually given more hours. Some store managers print out the leaderboard and post it in the break room. “It creates this competitive spirit — if I want more hours, I need to step it up a bit,” explains Greg Tanaka, Percolata’s 42-year-old founder.

The company runs “twin study” tests where it takes two very similar stores and only implements the system in one of them. The data so far suggest the algorithm can boost sales by 10-30 per cent, Tanaka says. “What’s ironic is we’re not automating the sales associates’ jobs per se, but we’re automating the manager’s job, and [our algorithm] can actually do it better than them.”

The last sentence of that excerpt is the key bit, I think.

If the combination of sensor data, sales data, and scheduling and employee information, when passed through the software's algorithm, can produce a staffing/scheduling plan that is 10% - 30% better (in terms of sales) than what even an experienced manager can conjure up himself or herself, then the argument to replace at least some 'management' with said algorithm is quite compelling. And it is a notable outlier among these kinds of 'automation is taking our jobs' stories, which usually focus on the people holding the jobs that 'seem' more easily automated: the ones that are repetitive, involve low levels of decision making, and require skills that even simple technology can master.
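As a quick aside, the 'shopper yield' metric at the heart of all this is simple enough to express in a few lines of code. Here is a minimal, hypothetical sketch; the employee names, the numbers, and the little ranking function are all invented for illustration, and Percolata's actual system obviously layers sensors, forecasting, and scheduling optimization on top of this:

```python
# Minimal, hypothetical sketch of the "shopper yield" metric described in the
# excerpt: sales divided by traffic. All names and numbers here are invented.

def shopper_yield(sales: float, traffic: int) -> float:
    """Sales divided by shopper traffic; 0 if no shoppers were counted."""
    return sales / traffic if traffic else 0.0

# Toy per-employee data: (name, sales during shifts, shoppers counted during shifts)
employees = [
    ("Alice", 4200.0, 310),
    ("Ben",   3900.0, 420),
    ("Carla", 5100.0, 350),
]

# Rank from highest to lowest yield, like the leaderboard the article describes.
ranked = sorted(employees, key=lambda e: shopper_yield(e[1], e[2]), reverse=True)

for name, sales, traffic in ranked:
    print(f"{name}: {shopper_yield(sales, traffic):.2f} in sales per shopper")
```

The hard part, of course, isn't the arithmetic; it's the forecasting and 15-minute scheduling optimization the excerpt describes being built on top of it, which is exactly the work that used to belong to the manager.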

Crafting the 'optimal' schedule for a retail location seems to require plenty of managerial skill and understanding of the business and its goals. And at least a decent understanding of the personalities, needs, wants, and foibles of the actual people whose names are being written on the schedule.

It seems like algorithms from companies like Percolata are making significant advances, at least on the first set of criteria: predicting traffic, estimating yield, and devising the 'best' staffing plan (at least on paper). My suspicion is that the algorithm is not quite ready to really, deeply understand the latter set of issues, the ones that are, you know, more 'human' in nature.

Or, said differently, it is unlikely the algorithm will be able to predict a drop in productivity caused by issues an employee may be having outside of work, or to adequately weigh how important it is to a good employee that the schedule work around a second job or some other responsibility.

There is probably a long way to go for algorithms to completely take over these kinds of management tasks, you know, the ones where actually talking to people is needed to reach solutions.

But when/if all the workers are automated away themselves? Well, then that is a different story entirely. 

Monday
Feb 1, 2016

Intelligent Technology

Because my life is much, much less interesting than yours, I am spending my Sunday night doing two things: Watching NBA basketball and reading this - The Accenture Technology Vision 2016 report. 

There is some really interesting information, research, and conclusions about the most important tech trends for the coming 3 - 5 years in the Accenture report, as well as a (probably unintentional) nod to my friends over at Ultimate Software, since their slogan, 'People First', is literally all over the report.

Accenture identifies five big themes in their technology vision for organizations, and there is one in particular, Trend #1, 'Intelligent Automation', that I was most interested in and wanted to explore a little bit. A few weeks ago I posted my 'What HR should be talking about in 2016' piece, and in that piece (written over the holidays and before I became aware of the Accenture report), I had this to say about 'Intelligent Technology' - pretty much the same thing as 'Intelligent Automation':

But this year, I hope that HR and HR tech expands not just the capability but the conversation in this area just a bit further, into something more akin to a kind of 'intelligent' set of tools and workflows that will help HR, managers, and employees complete processes and tasks, and hopefully allow them to make better decisions. This technology would not just predict the likelihood of a potential outcome, but would 'learn' from usage patterns, history, preferences, and more about what you (the employee) should do next, given a set of data and process conditions. That could mean surfacing the 'right' learning content when you get assigned to a new project, suggesting you make an internal connection with a specific colleague when you run a search in the corporate knowledge base for a specific topic, or, if you are a manager, providing you with intelligent recommendations about how to handle coaching conversations with your team members, adapted to their individual profiles and preferences.

Pretty heady stuff, right? I spent at least 20 minutes on that post. For real.
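To make that vision slightly less abstract, here is a rough, purely hypothetical sketch of the kind of rule-based 'next best action' logic I was describing. The function, field names, and rules below are all invented for illustration and are not taken from any actual HR product; a real version would learn these rules from usage data rather than hard-coding them:

```python
# Hypothetical sketch of "intelligent" next-step suggestions for an employee.
# All rules, field names, and data here are invented for illustration only.

def suggest_next_actions(employee: dict) -> list[str]:
    """Return simple, rule-based suggestions from an employee's recent activity."""
    suggestions = []

    # Surface relevant learning content when someone joins a new project.
    if employee.get("new_project"):
        suggestions.append(
            f"Recommend intro content for the '{employee['new_project']}' project"
        )

    # Suggest an internal connection when someone keeps searching the same topic.
    for topic, count in employee.get("search_counts", {}).items():
        if count >= 3:
            suggestions.append(f"Connect with an in-house expert on '{topic}'")

    # For managers, adapt coaching prompts to each report's stated preference.
    for report, style in employee.get("reports", {}).items():
        suggestions.append(f"Coaching prompt for {report}: use a {style} approach")

    return suggestions

example = {
    "new_project": "Payroll migration",
    "search_counts": {"data privacy": 4},
    "reports": {"Dana": "direct feedback"},
}
print(suggest_next_actions(example))
```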

Now let's turn to the above-mentioned Accenture Technology Vision 2016 report and look at a bit of what they have to say about 'Intelligent Automation':

On the surface it may appear to be a simple transfer of tasks from man to machine. But look a little closer. The real power of intelligent automation lies in its ability to fundamentally change traditional ways of operating, for businesses and individuals. These machines offer strengths and capabilities (scale, speed, and the ability to cut through complexity) that are different from—but crucially complementary to—human skills. And their increasing sophistication is invigorating the workplace, changing the rules of what’s possible so that people and their new digital co-workers can together do things differently. And do different things. 

Machines and artificial intelligence will be the newest recruits to the workforce, bringing new skills to help people do new jobs, and reinventing what’s possible. 

Although the two pull quotes are not exactly the same (mine is kind of narrow and talks about some HR tech-specific use cases, while Accenture is talking about really big-picture kinds of things), at their core they are really talking about the same thing: technology, automation, and intelligent solutions that will do what machines do best (collect, analyze, and synthesize large data sets), and that, in the most effective organizations, will combine with human intelligence, experience, and social understanding to produce the best outcomes.

I have to admit it was pretty cool to see the Accenture report this weekend and find that Intelligent Automation/Technology featured as prominently in their take on 2016 as it did in my HR-centric take from the beginning of the year. It feels kind of validating in a way. Now, both Accenture and I could be wrong about this, I suppose, but at least I don't feel crazy for positing the idea.

Ok, enough, the Knicks are about to start. Check out the Accenture Technology Vision 2016 report for more information on this, and after you have checked it out, send a note to your HR tech provider to see what, if anything, they are working on towards a future of 'Intelligent Technology'.

Have a great week! 

Friday
May 29, 2015

Will you be replaced by a robot? Use this nifty tool to find out

Will you or your job be replaced by a robot, an algorithm, or some other type of automation technology?

Of course it will!

The question should really be 'When?' not 'If?'

But for something fun to do on a Friday, head over to the NPR Planet Money site and take a spin on their interactive tool, which uses data from a University of Oxford study entitled “The Future of Employment: How Susceptible Are Jobs to Computerization?” and lets you see just how likely it is that your job will be automated away in the near future.

Here is what the tool says about 'Cashiers', one of the most likely jobs to disappear in the next 20 years or so.

As you can tell from the charts, the likelihood of a given job becoming the domain of robots is influenced by four factors: the need to conjure up clever ideas and solutions, the amount of social interaction needed in the job, the space the job requires (robots are still not great at navigating tight spaces), and the negotiation skills needed.
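The NPR tool draws on the Oxford study's actual model, but just to illustrate the general idea of rolling a few job traits into a single automation-risk score, here is a rough sketch; the weights, the 0-1 ratings, and the two example roles are all invented and are not the study's methodology:

```python
# Rough, invented illustration of scoring automation risk from a few job traits.
# The weights and scale here are NOT the Oxford study's actual model.

def automation_risk(creativity: float, social: float,
                    cramped_space: float, negotiation: float) -> float:
    """
    Each trait is rated 0-1 (how much the job requires it). Jobs that demand
    more creativity, social interaction, and negotiation are treated as harder
    to automate, as is work performed in tight, awkward spaces.
    """
    protection = (0.35 * creativity + 0.30 * social
                  + 0.15 * cramped_space + 0.20 * negotiation)
    return round(1.0 - protection, 2)  # higher = more automatable

# Toy comparison: a cashier-like role vs. a manager-like role.
print("Cashier-ish role:", automation_risk(0.1, 0.3, 0.2, 0.1))   # high risk
print("Manager-ish role:", automation_risk(0.7, 0.9, 0.1, 0.8))   # lower risk
```

Even in this toy version, the traits that protect a job are the same ones the tool highlights, which leads to the next point.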

Luckily for many of us, jobs that fall in the 'management' domain still seem (reasonably) safe for now.

Go have some fun on a Friday, and check out your own odds and see if you should be considering a career move (before it's too late).

Have a great weekend!

Monday
Apr 27, 2015

VIDEO: The project is called 'Replacing humans with robots'

Directing you to a super-interesting short video (about 5 minutes or so) produced by the New York Times as the first installment of a series they call 'Robotica'. In the video, we see more about the growth, challenges, and worker impact of the surge in adoption of industrial robots in Chinese manufacturing. Take a few minutes to watch the piece (embedded below; email and RSS subscribers will have to click through), and then read some comments from me after the clip.

Really interesting stuff, I think, and for me very instructive, as in 5 minutes it hits many of the big-picture issues associated with the increasing automation of work and the impacts it will have on human workers.

1. At least in this Chinese province, the goals of this program are extremely clear - 'Replacing human workers with robots.' While the motivations for this stated goal might be specific to this region, I think it would be foolish to assume that this phenomenon and executive attitude isn't much more common, and not just in China. CEOs everywhere are going to be intrigued by, and in pursuit of, what increased automation promises - lower costs, increased consistency and quality, and a predictable labor supply.

2. The video does a nice job of showing the likely mixed or divergent impact of increased automation on the front-line workers who are usually most affected. While one employee (hand-picked by the factory leaders) waxes happily about how the robots are making his job easier and happier, another talks frankly about his (and others') inability to easily transition from the manual, repetitive work being replaced by robots to higher-value, creative, and 'human' work. Whether in China or in Indianapolis, no low-skilled worker can suddenly become a high-skilled or creative worker overnight.

3. The video alludes to the potential, one day, for robots to actually manufacture the robots themselves, even if that is not yet happening today. This notion, that automated technologies will largely build more of themselves, is one of the key differences between modern, robotic-type automation and previous technological breakthroughs. Henry Ford's Model A didn't drive itself (or build itself). Telephones didn't make calls for you. Personal computers needed LOTS of people entering data into them in order to get anything useful back out of them. But robots building more robots to replace more people? That sounds a little scary.

I will sign off here, take a look at the video if you can spare a few minutes today and let me know what you think in the comments below. Or have your robot assistant watch it for you.

Have a great week!

Tuesday
Feb 24, 2015

On trusting algorithms, even when they make mistakes

Some really interesting research from the University of Pennsylvania on our (people's) tendency to lose faith and trust in data forecasting algorithms (or, more generally, advanced forms of smart automation) more quickly than we lose faith in other humans' capabilities (and our own), after observing even small errors from the algorithm, and even when seeing evidence that, relative to human forecasters, the algorithms are still superior.

From the abstract of Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err:

Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet, when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

Let's unpack that some. In the research conducted at Penn, the authors showed that even when given evidence of a statistical algorithm's overall superior performance at predicting a specific outcome (in the paper it was the likelihood of success of MBA program applicants that the humans and the algorithm attempted to predict), most people lost faith and trust in the algorithm, and reverted to their prior, inferior predictive abilities. And in the study, the participants were incentivized to pick the 'best' method of prediction: They were rewarded with a monetary bonus for making the right choice. 

But still, and consistently, the human participants more quickly lost faith and trust in the algorithm, even when logic suggested they should have selected it over their own (and other people's) predictive abilities.

Why is this a problem, this algorithm aversion?

Because while algorithms are proving to be superior at prediction across a wide range of use cases and domains, people can be slow to adopt them. Essentially, whenever prediction errors are likely—as they are in virtually all forecasting tasks—people will be biased against algorithms, because people are more likely to abandon an algorithm than a human judge for making the same mistake.
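To see why that bias is costly, here is a toy comparison of forecast errors; the numbers are invented purely for illustration and have nothing to do with the Penn study's actual data:

```python
# Toy illustration (invented numbers): an algorithm with one conspicuous miss
# can still be more accurate, on average, than a human forecaster.

actual    = [70, 55, 80, 65, 90, 60]   # true outcomes
algorithm = [68, 57, 79, 72, 88, 61]   # one visible miss (72 vs. an actual 65)
human     = [60, 50, 70, 66, 75, 70]   # consistently further from the truth

def mean_absolute_error(predictions, outcomes):
    return sum(abs(p - o) for p, o in zip(predictions, outcomes)) / len(outcomes)

print("Algorithm MAE:", mean_absolute_error(algorithm, actual))  # 2.5
print("Human MAE:    ", mean_absolute_error(human, actual))      # 8.5
```

The algorithm's single obvious miss is the one people remember and punish, even though its average error is much lower; that gap is exactly the cost the researchers are describing.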

What might this mean for you in HR/Talent?

As more HR and related processes, functions, and decisions become 'data-driven', it is likely that sometimes, the algorithms we adopt to help make decisions will make mistakes. 

That 'pre-hire' assessment tool will tell you to hire someone who doesn't actually end up being a good employee.

The 'flight risk' formula will fail to flag an important executive as a risk before they suddenly quit and head to a competitor.

The statistical model will tell you to raise wages for a subset of workers but after you do, you won't see a corresponding rise in output.

That kind of thing. And once these 'errors' become known, you and your leaders will likely want to stop trusting the data and the algorithms.

What the Penn researchers are saying is that we have much less tolerance for the algorithm's mistakes than we do for our own. And maintaining that attitude in a world where the algorithms are only getting better is, well, a mistake in itself.

The study is here, and it is pretty interesting; I recommend it if you are interested in making your organization more data-driven.

Happy Tuesday.