Entries in Technology (338)

Monday
Dec 04, 2017

Alexa, what do I need to get done today?

Probably at the top of my 'Cool things I acquired in 2017' list is the Amazon Echo, powered by Amazon's 'Alexa' platform.

I talk to Alexa every single day. In fact, I probably spent more time with Alexa than with anyone else this year. I probably ought to think about what that means. Anyway, back to the point. The single feature I use and enjoy the most is the 'Flash Briefing' - a short news and information update that can be configured (via a slew of independently created 'skills,' or sources) to have Alexa give me a tailored, personalized rundown of news, sports, weather, meetings, and other updates that are meaningful to me. I probably use this feature two or three times a day. I know, I am weird. But I have become so hooked on, almost dependent on, Alexa that I even bought a second Echo device for the second floor of my house, so that Alexa and I would never be too far apart. Wow, that is really weird. But (again) back to the point.
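(A quick aside for the technically curious: under the covers, a custom Flash Briefing skill is basically just a feed that Alexa polls and reads aloud. Here is a minimal sketch of a single feed item in Python - the field names follow the Flash Briefing Skill API, but the uid, URL, and content are made up.)

```python
# A minimal sketch of one item in a custom Flash Briefing feed. Alexa
# polls the feed URL and reads 'mainText' aloud. The field names follow
# the Flash Briefing Skill API; the uid, URL, and content are made up.
import json
from datetime import datetime, timezone

feed_item = {
    "uid": "urn:example:briefing:item1",               # unique per item
    "updateDate": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.0Z"),
    "titleText": "Morning update",
    "mainText": "You have three meetings today and two open to-dos.",
    "redirectionUrl": "https://example.com/briefing",  # 'read more' link
}

print(json.dumps(feed_item, indent=2))
```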

Last week Amazon announced the formal launch of the 'Alexa for Business' platform, which will enable organizations that place Alexa-enabled Echo devices in their offices, lobbies, and conference rooms to centrally administer those devices, provision user access to them, enable both public and private/custom skills on them, and finally (and perhaps most interestingly), allow employees to access private/custom/proprietary skills on their personal Echo devices at home.

Think about walking into a conference room and simply stating 'Alexa, start the meeting' to have Alexa fire up the connected A/V in the room, call the conference bridge number, provide the authentication to the conference call provider, and send out a notification to everyone on the meeting invite that the call/meeting has started. Really cool (especially if you are as sick as I am of having to enter about 27 numbers and codes to kick off a conference call), and according to the early Alexa for Business release documentation, really easy to set up.
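For a sense of what the skill side of that might look like, here is a minimal sketch using the Alexa Skills Kit SDK for Python. To be clear, the 'StartMeetingIntent' name and the dial_conference_bridge() helper are hypothetical stand-ins - the real calendar, A/V, and conference-bridge plumbing would live behind them.

```python
# A minimal sketch of the skill side of 'Alexa, start the meeting',
# using the Alexa Skills Kit (ASK) SDK for Python. The intent name
# 'StartMeetingIntent' and the dial_conference_bridge() helper are
# hypothetical.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name


def dial_conference_bridge(room_id):
    # Hypothetical helper: look up the room's next meeting, dial the
    # bridge, enter the access codes, and notify the invitees.
    return "Okay, starting your meeting. Invitees have been notified."


class StartMeetingHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("StartMeetingIntent")(handler_input)

    def handle(self, handler_input):
        speech = dial_conference_bridge(room_id="conf-room-3a")
        return handler_input.response_builder.speak(speech).response


sb = SkillBuilder()
sb.add_request_handler(StartMeetingHandler())

# Deployed as the entry point of an AWS Lambda function backing the skill
lambda_handler = sb.lambda_handler()
```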

In addition to the meeting management stuff, Alexa for Business will be able to perform in a business/office setting the same kinds of tasks that millions of people are using Alexa for at home - controlling smart lights and equipment, getting Flash Briefings, setting reminders, managing to-do lists, and even performing basic calendaring. I ask Alexa 'What's my next meeting?' all the time.


These use cases are all pretty cool, and they translate easily to workplace contexts because they are simple and straightforward. But do not underestimate how cool it would be to have Alexa lay out your day, your meetings, and your important 'to-dos' in a simple summary at the start of the day.

But what is potentially more interesting is that Amazon has created a skills developer kit and a set of APIs to enable solution providers (like your HRIS provider) and individual organizations to create custom skills, enabling Alexa-style access to things like sales reports, employee schedules, business travel itineraries, or even an update on the slate of candidates you have to interview that day.
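As a sketch of that pattern (and only a sketch - the endpoint, token, and response shape below are all hypothetical), a private 'interview slate' skill might do little more than call the HRIS's REST API and turn the result into a spoken summary:

```python
# A sketch of the backend a private 'interview slate' skill might call.
# The endpoint, auth token, and response shape are all hypothetical;
# only the pattern (skill -> HRIS REST call -> spoken summary) matters.
import requests


def todays_interview_slate(recruiter_id: str) -> str:
    resp = requests.get(
        "https://hris.example.com/api/v1/interviews",  # hypothetical endpoint
        params={"recruiter": recruiter_id, "date": "today"},
        headers={"Authorization": "Bearer <token>"},   # placeholder credential
        timeout=5,
    )
    resp.raise_for_status()
    candidates = resp.json().get("candidates", [])
    if not candidates:
        return "You have no interviews scheduled today."
    names = ", ".join(c["name"] for c in candidates)
    return f"You have {len(candidates)} interview(s) today: {names}."
```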

It is not at all a stretch to expect that very soon some, if not most, of the major HCM solution providers will begin to offer at least some support for Alexa for Business skills, because (and this is just like we saw with smartphones and tablets) as more and more employees adopt and begin to use these devices at home, they will want to use them for work. And also 'at home, for work,' if that makes sense.

If I were an HR/talent pro thinking about or evaluating some new HR tech tools, I would definitely ask the providers vying for my business what plans, if any, they have to incorporate Alexa, or voice UX more generally, into their technology and supported processes.

Because it is only a matter of time until your CEO or your Head of Sales comes to you to ask, 'Why can't I do (insert something they like/need to do here) on my Echo?'

Happy Monday. Have a great week!

Thursday
Nov 30, 2017

It doesn't matter if the robots aren't coming for your job, they are coming for your neighbor's job

After reading a flurry of pieces over the last few days about the progress being made in self-driving vehicle technology, I was reminded that one job category that seems likely to be highly pressured by this type of automation is commercial vehicle driving. You don't have to be a genius to realize that once Tesla (and others) get enough of their new commercial trucks into service, Generation 2.0 of those trucks will attempt to eliminate not just diesel fuel and noxious emissions from their products - they will try to eliminate the driver too.

And you probably caught something about Amazon's newest experiments with retail stores that have no cashiers. Or maybe you have heard about fast food giants like McDonald's or Panera pushing more self-service kiosks into their locations, to reduce the need for human cashiers and order-takers. Or the hotels that are using mobile robots to deliver room service meals to their guests. And the list goes on and on.

And maybe after reading all these stories you say to yourself: "Self, these technology advancements are amazing. But good thing I am a (insert the white-collar 'knowledge' job you have here) and not a truck driver or a cashier."

And while whether the robots are coming sooner or later for whatever 'knowledge' job you have today is probably debatable, let's pretend for the moment, in the words of Big Brother (yes, I am a fan): 'Knowledge worker X, you are safe.' Phew. That is a relief.

But here is the thing, the kinds of jobs that are most vulnerable, most likely to be adversely impacted by automation are ones that are held by millions of people. Have a look at the chart below, from BLS data from May 2016.

[Chart: the ten most-held job categories in the US - BLS data, May 2016]

Look closely at that list of the Top 10 'most-held' job categories in the US and think about which of them (clue: it is almost all of them) are going to be increasingly pressured by technology, automation, and 'self-service'.

There are about 150 million people in the US labor force, give or take. The Top 10 job categories in the chart above represent about 21 or 22 million workers - roughly 15% of all US workers. That is a huge number, especially considering that half-point or full-point moves in the unemployment rate are such big news.

The potential and the consequences of labor automation are concerns for everyone - whether or not your job is 'safe'.

And one last bit of food for thought: this issue, this challenge of automation and technology threatening jobs, is also going to be a local one. Check out the chart below showing the largest private employer in each US state. See any cause for concern?
[Chart: the largest private employer in each US state]
When Walmart decides to move more aggressively into online ordering, self-service, robot customer-service pods, and Amazon-like efficiency in its distribution centers, there will be an impact too.

But that's ok. You don't work at Walmart.

But I bet you know someone who does.

Wednesday
Nov 22, 2017

HRE Column: LinkedIn One Year Later

Once again, I offer my semi-frequent reminder and pointer for blog readers that I also write a monthly column at Human Resource Executive Online called Inside HR Tech that can be found here.

This month, I take a look back at the Microsoft acquisition of LinkedIn, which (although it seems like a lot longer) officially closed only about this time last year. It has been a pretty interesting, innovative, and fascinating year for the largest professional social network. Since LinkedIn is such an important and influential technology for organizations and individual professionals alike, it seemed like a good time to reflect back on the year and to speculate a bit on what might lie ahead.

In the HRE Column, I dig a little bit into some of LinkedIn's recent product announcements, look at how the Microsoft angle is beginning to play out, and consider how LinkedIn could evolve moving forward. I hope to have some execs from LinkedIn on an upcoming HR Happy Hour Show to talk about some of these ideas in more depth.

Having said that, here's a taste of the HRE piece, titled 'Betting on LinkedIn':

I recently was invited to attend a quarterly product update from the folks at LinkedIn Talent Solutions, an online event where the product and marketing teams provide demonstrations and details about new product initiatives and capabilities that are (or are about to be) released. I get these kinds of invites from solution providers quite often, and admittedly do not usually attend -- either I am busy planning the annual HR Tech Conference or I simply don't get all that excited by incremental updates to existing platforms or solutions.

But I made an exception in this particular case and watched the most recent LinkedIn update. The reasons were twofold: I had some extra time, and I was interested in one particular update LinkedIn planned to share - information regarding the integration of LinkedIn data with Microsoft Word in the context of a user creating a resume.

And, since Microsoft finished its $26.2-billion acquisition of LinkedIn about a year ago now, I figured it was an appropriate time to reflect on that industry development, as well as some new capabilities being added to the platform, the challenges the company faces, and what might be coming next.

On its latest product update webcast, LinkedIn showcased two new initiatives that reflect its continued need to provide value to two distinct constituencies: HR and talent-acquisition professionals, and its rank-and-file members. Each obviously has very different needs and goals.

The first enhancement for organizational users of its Talent Solutions products was a new performance summary report, which provides them with a simple but comprehensive overview of organizational activity and results on the platform. On one dashboard, HR and talent management professionals can see data such as the number of hires who were "influenced" by candidates viewing company profiles and content on LinkedIn prior to being hired; the effectiveness and response rates of candidate outreach; and most interestingly to me, the top five companies to which the organization is losing talent, and from which it is winning talent. I can recall working at an organization where we were suddenly losing lots of talented sales reps over a short period of time, and had to scramble (and pull up lots of individual LinkedIn profiles) to figure out which competitors were poaching them. We would have loved to have had this information in one place.

The other new capability -- and probably the more innovative development -- was the announcement of a deeper integration of LinkedIn data with Microsoft Word. For users drafting a resume in Word, information from other LinkedIn profiles is used to help craft it. This Resume Assistant asks them to provide a job role of interest and then surfaces examples from LinkedIn of typical work-experience summaries and skills descriptors.

Read the rest at HR Executive Online...

If you liked the piece you can sign up over at HRE to get the Inside HR Tech Column emailed to you each month. There is no cost to subscribe, in fact, I may even come over and re-surface your driveway, take your dog for a walk, rake up your leaves, and eat your leftover pumpkin pie.

Have a great day and Happy Day Before Thanksgiving!

Wednesday
Nov 15, 2017

Self-driving bus crashes, proving all buses should be self-driving

In case you missed it, a fairly significant pilot of self-driving vehicles - in this case, shuttle buses - launched last week in Las Vegas. In this test, shuttle buses developed by the French company Navya will carry passengers along a half-mile route in downtown Las Vegas (that part of Vegas that most of us who go for conferences and conventions tend to ignore). The Navya ARMA buses rely on GPS, cameras, and light-detecting sensors to navigate the public streets. According to reports, the year-long test hopes to shuttle about 250,000 passengers up and down the Vegas streets.

Pretty cool, right?

Guess what happened in the first couple of hours after launching the self-driving pilot program?

Yep, a CRASH.

The first self-driving bus was in a minor accident within a couple of hours of the service's launch, when a (human-driven) delivery truck failed to stop in time and collided with the stationary shuttle bus.

According to a spokesperson from the American Automobile Association, "The truck making the delivery backed into the shuttle, which was stopped. Human error causes most traffic collisions, and this was no different."

No one was hurt, the damage was minor, and the self-driving pilot program continues in Las Vegas.

Why bring this up, especially on a blog that at least pretends to be about work, HR, HR Tech, etc.?

Because these kinds of technology developments - self-driving vehicles, robots that can sort and organize inventory in warehouses, robots that will greet customers and provide basic services in retail environments and hotels - are being developed, improved, and deployed at increasing rates and in more and more contexts.

Self-driving technology in particular, especially for commercial vehicles, is by some estimates within 10 years of becoming a mainstream technology, potentially displacing hundreds of thousands of commercial truck drivers. And as an aside, this piece describes how the trucking industry is clearly not ready for this and other technological disruptions.

This is not meant to be another, tired, 'Robots are taking our jobs' post, but rather another reminder that technology-driven disruption will continue to change the nature of work, workplaces, and even our own ideas about the role of people in work and the economy. And HR and HR tech leaders have to take a leading role in how, where, when, and why their organizations navigate these changes, as they sit directly at the intersection of people, technology, and work.

And lastly, if that Las Vegas delivery truck had been equipped with the same kind of self-driving tech that the Navya ARMA bus has, there is almost no chance there would have been an accident.

But it might have been fun if it happened anyway. I'd love to see two 'robot' trucks argue with each other on the side of the road about which one was the doofus who caused the accident.

Have a great day!

Wednesday
Nov 08, 2017

Looking for bias in black-box AI models

What do you do when you can't sleep?

Sometimes I watch replays of NBA games (how about my Knicks?), and sometimes I read papers and articles that I had been meaning to get to, but for one reason or another hadn't made the time for.

That is how I spent an hour or so with 'Detecting Bias in Black-Box Models Using Transparent Model Distillation', a recently published paper by researchers at Cornell, Microsoft, and Airbnb. I know, not exactly 'light' reading.

Full disclosure: I don't profess to have understood all the details and complexity of the study and research methods. But the basic premise of the research - the problem the researchers are looking to solve - is one I do understand, and one that you should too as you think about incorporating AI technologies into workplace processes and decision support/making.

Namely, that AI technology can only be as good and as accurate as the data it’s trained on, and in many cases we end up incorporating our human biases into algorithms that have the potential to make a huge impact on people’s lives - like decisions about whom to hire and promote and reward.

In the paper, the researchers created models that mimic the ones some companies use to produce 'risk scores' - the kind of score a bank uses to decide whether or not to give someone a loan, or a judicial administration uses to decide whether or not to grant someone early parole. This first set of models stands in for the ones these companies use themselves.

Then the researchers created a second, transparent model, trained on the actual outcomes the first set of models is designed to predict - whether or not the loans were paid back, and whether or not the parolee committed another crime. Importantly, these models did include data points that most of us, especially in HR, are trained to ignore - things like gender, race, and age. The researchers did this intentionally, and rather than me trying to explain why that is important, read through this section of the paper where they discuss the need to assess these kinds of 'off-limits' data elements (emphasis mine):

Sometimes we are interested in detecting bias on variables that have intentionally been excluded from the black-box model. For example, a model trained for recidivism prediction or credit scoring is probably not allowed to use race as an input to prevent the model from learning to be racially biased. Unfortunately, excluding a variable like race from the inputs does not prevent the model from learning to be biased. Racial bias in a data set is likely to be in the outcomes - the targets used for learning; removing the race input variable does not remove the bias from the targets. If race was uncorrelated with all other variables (and combinations of variables) provided to the model as inputs, then removing the race variable would prevent the model from learning to be biased because it would not have any input variables on which to model the bias. Unfortunately, in any large, real-world data set, there is massive correlation among the high-dimensional input variables, and a model trained to predict recidivism or credit risk will learn to be biased from the correlation between other input variables that must remain in the model (e.g., income, education, employment) and the excluded race variable, because these other correlated variables enable the model to more accurately predict the (biased) outcome, recidivism or credit risk. Unfortunately, removing a variable like race or gender does not prevent a model from learning to be biased. Instead, removing protected variables like race or gender makes it harder to detect how the model is biased, because the bias is now spread in a complex way among all of the correlated variables, and also makes correcting the bias more difficult, because the bias is now spread in a complex way through the model instead of being localized to the protected race or gender variables. The main benefit of removing a protected variable like race or gender from the input of a machine learning model is that it allows the group deploying the model to claim (incorrectly) that the model is not biased because it did not use the protected variable.
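The quoted point is easy to demonstrate. Here is a small simulation - my own illustration, not code from the paper - in which the protected variable is deliberately excluded from training, yet the model's risk scores still split cleanly along it because a correlated proxy remains in the inputs:

```python
# A small simulation (my own illustration, not code from the paper).
# The protected variable 'race' is deliberately left out of training,
# but a correlated proxy (here, a crude income variable) remains, so
# the model's risk scores still split along the protected line.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000
race = rng.integers(0, 2, n)                # protected variable
income = rng.normal(50 - 15 * race, 10, n)  # proxy correlated with race

# Historical outcomes carry the bias (worse outcomes for race == 1)
p_bad = 1 / (1 + np.exp(-(-1.0 + 1.2 * race - 0.01 * income)))
outcome = rng.random(n) < p_bad

X = income.reshape(-1, 1)                   # race deliberately excluded
scores = LogisticRegression().fit(X, outcome).predict_proba(X)[:, 1]

# The model never saw race, yet its scores differ sharply by group
print("mean predicted risk, race=0:", round(scores[race == 0].mean(), 3))
print("mean predicted risk, race=1:", round(scores[race == 1].mean(), 3))
```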

This is really interesting, if counter-intuitive to how most of us (me for sure) would think about ensuring fairness in the AI and algorithms we want to deploy to evaluate data sets in a process meant to provide decision support for the 'Who should we interview for our software engineer opening?' question.

I'm sure we've seen or heard about AI for HR solutions that profess to eliminate biases like the ones that have existed around gender, race, and even age from important HR processes by 'hiding' or removing the indicators of such protected and/or under-represented groups.

This study suggests that removing those indicators from the process and the design of the AI is exactly the wrong approach - and that with large data sets, the AI can and will 'learn' to be biased anyway.
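To make the paper's detection approach slightly more concrete, here is a rough sketch of the two-model comparison idea. This is a simplification - the researchers distill into transparent tree-based models rather than the plain linear ones below, and the dataset and column names are hypothetical - but the shape of the test is the same: one transparent model mimics the black box's scores, a second is trained on the true outcomes, and the gap between what each attributes to a protected variable is the tell.

```python
# A rough sketch (a simplification; the paper distills into transparent
# tree-based models, not plain linear ones) of the two-model comparison.
# The dataset and column names here are hypothetical.
import pandas as pd
from sklearn.linear_model import LinearRegression, LogisticRegression

df = pd.read_csv("risk_data.csv")  # hypothetical: inputs + score + outcome
features = ["income", "education_yrs", "employment_yrs", "age", "race"]

# Model 1: transparent mimic, trained to reproduce the black-box score
mimic = LinearRegression().fit(df[features], df["black_box_score"])

# Model 2: transparent model trained on the actual outcome the black
# box was built to predict (e.g., loan default, recidivism)
truth = LogisticRegression(max_iter=1000).fit(df[features], df["true_outcome"])

# A protected variable that carries much more relative weight in the
# mimic than in the outcome-trained model is a red flag for bias in
# the black box.
for name, m_coef, t_coef in zip(features, mimic.coef_, truth.coef_[0]):
    print(f"{name:15s} mimic={m_coef:+.3f} outcome={t_coef:+.3f}")
```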

Really powerful and interesting stuff for sure.

As I said, I don't profess to get all the details of this research, but I do know this: if I were evaluating an AI for HR tool for something like hiring decision support, I would ask a potential provider these questions:

1. Do you include indicators of a candidate's race, gender, age, etc. in the AI/algorithms that you apply in order to produce your recommendations?

And if their answer is 'No, we don't include those indicators':

2. Then, are you sure that your AI/algorithms aren't learning to infer them anyway - i.e., aren't still potentially biased against under-represented or protected groups?

Important questions to ask, I think.

Back to the study (in case you don't slog all the way through it): the researchers concluded that both of the large AI tools they examined (loan approvals and parole approvals) still exhibited the biases their builders professed to have 'engineered' away. And chances are, had the researchers trained their sights on one of the HR processes where AI is being deployed, they would have found the same thing.

Have a great day!
