    Entries in Technology (242)

    Monday
Mar 30, 2015

    UPDATE: The Microsoft Band and the Future of Wearables at Work

Do you wear a fitness tracker like a Fitbit or a Jawbone? Or maybe you are planning to jump on the Apple Watch fanboy train in a few weeks and take advantage of that device's ability to track your activity. Lots of folks are keeping closer track of their workouts and activity today.

    A few months back Microsoft launched its first entry into the wearables market with the Microsoft Band, a wearable tracker that possesses a variety of sensors including a microphone, a GPS location sensor, motion sensors, an optical sensor that measures heart rate, a sensor that tracks skin conductance, which can reveal levels of stress, and even a UV sensor to calculate sun exposure, delivered in a black bracelet with a rectangular touch screen.

At that time, your humble blogger, (me), shared some thoughts about why this particular wearable smart device could be the one with the greatest potential for near-term impact and relevance to work, workplaces, and employees. Namely, because Microsoft has such a choke hold on most organizations' email, calendaring, and document management that it would be both natural and powerful for a Microsoft wearable to be integrated with these existing and traditional workplace tools.

    You can read my entire take here, but at the risk of getting too meta and quoting myself, below is the gist of my argument back in October 2014:

    I can think of a couple of really compelling use cases for this kind of integration right off the top. 

One - how work itself affects employee health. Does someone's heart start racing in every staff meeting? Do they begin to get twitchy when called upon to present to a group? Does a certain interaction with a colleague result in three nights of poor sleep? And what can organizations then do to better understand and potentially align individuals with projects and team members that aid their ability to perform, while not making them crazy? How do schedules, (and in particular over-scheduling), impact employee health and activity? Do we need to be more mindful of how overworked and over-scheduled many of our people are?

Two - insights into who in the organization inspires, challenges, and lifts people up, and who serves as essentially the corporate buzzkill. Imagine a meeting with 10 people inside, all wearing the MS Band. One person dominates the meeting, maybe it is the boss, and immediately after, the other 9 people begin to show signals of nervousness, irritability, or even lethargy. Maybe email and collaboration patterns in the team begin to show signs of changing as well. Perhaps some members of the team skip their normal workouts for a day or two in the aftermath. Maybe some folks don't even turn up the next day.

Clever stuff, right? Why bring that back up again today? Well, check the excerpts from a recent piece posted a few days ago on the MIT Technology Review site - Microsoft's Wristband Would Like to Be Your Life Coach:

    During a recent interview at Microsoft’s Redmond, Washington, headquarters, Matt Barlow, general manager of marketing for new devices, said the company is investigating the kinds of insights it can share with users by matching up biometric data with other sources of information like their calendar or contacts to show things like which events or people may stress them out.

    In the coming months, the Microsoft Health app is poised to gain the ability to compare calendar or contact information with your physical state as measured by the band—your heart rate or skin conductance level, for instance—so the app could nudge you with detailed observations about how those things might relate. For instance, the app might send you an alert like, “I noticed you have a meeting with Susan tomorrow, and last time you met with her your heart rate went up 20 beats per minute and stayed elevated for an hour. How about trying this deep-breathing exercise that you can use with the Band?”

    Initially, these kinds of scenarios are expected to become possible through an integration with Microsoft Office services, though over time it may branch out to include other services as well.

Hey - the Microsoft dude is essentially touting the same kinds of capabilities and interesting workplace data integrations that little old me talked about in October. But not to say I told you so...

But the real point of resurfacing the old post and topic was just to remind you that even though wearables and fitness trackers have been around for a while now, we are still just in the first inning of a long game. Trackers and biological/physiological sensors won't really start impacting the way work gets done until they are actually integrated with the tools of work - email, calendars, meetings, etc.

    Stay tuned...

    Have a great week!

    Monday
Mar 23, 2015

    PODCAST - #HRHappyHour 207 - CHRO Corner: Laurie Zaucha, Paychex

    HR Happy Hour 207 - The CHRO Corner with Laurie Zaucha, Paychex

    Recorded Friday March 20, 2015

    LISTEN HERE

Hosts: Steve Boese, Trish McFarlane

Guest: Laurie Zaucha, Paychex

This week on the show, the HR Happy Hour launched a new series, the CHRO Corner, that will feature the most interesting and influential leaders in Human Resources today. Our first guest in this new series is Laurie Zaucha, VP of HR and Organizational Development at Paychex, a leading provider of HR software solutions and services with over 500K customers and upwards of 13,000 employees.

On the show, Laurie shared her insights on the role of technology in the modern HR organization, what HR leaders should consider when evaluating technology, how Paychex has adopted several innovative and collaborative programs for candidate attraction as well as internal employee engagement, and finally, some thoughts on the most important focus areas for the HR leaders of the future. Laurie is one of the most progressive HR leaders in the industry, and she shared some amazing insights on leading HR in the modern organization.

    Additionally, Laurie talked about moving and shaping the culture of an organization, Steve tried to sound (reasonably) intelligent interviewing his former boss Laurie, and we all realized once again the benefits of post-production editing.

You can listen to the show on the show page here:

    And of course you can listen to and subscribe to the HR Happy Hour Show on iTunes, or via your favorite podcast app. Just search for 'HR Happy Hour' to download and subscribe to the show and you will never miss a new episode.

    This was a fun and interesting show with one of the most innovative HR leaders in the technology industry. 

    Thanks to Laurie and everyone at Paychex for being part of the HR Happy Hour fun!

    Monday
Mar 9, 2015

    Team PowerPoint vs. Team Excel

What would you say is the preferred tool or mechanism in your organization for creating, sharing, and socializing the information that is used to generate discussion and, ultimately, decisions?

    While many of us (sadly) would probably default to 'Email' as the technology of choice, even heavy email cultures rely on 'real' office productivity applications for work products and communicating information. Excel and PowerPoint, assuredly, are two of the most common applications in use across organizations of all types. But which one of these two applications tends to dominate how business information and data are documented and shared can reveal plenty about how decisions are made and what kind of organizational culture prevails.

    Check the below excerpt from a recent piece on Digitopoly, a review of research into how competing teams at NASA (Team PowerPoint and Team Excel), created and shared data and information on robot technology used for experiments on space projects:

    On Team Excel, the robot has a number of instruments but separate teams manage and have property rights over those instruments. The structure is hierarchical and the various assignments the instruments are given are mapped out in Excel. By contrast on Team PowerPoint, no one team owns an instrument. Instead, all decisions regarding, say, where to position the robot are made collectively in a meeting. The meetings are centered around PowerPoint presentations that focus on qualitative trade-offs from making one decision rather than another. Then decisions are taken using a consensus approach — literally checking if everyone is “happy.”

What is fascinating about this is that the type of data collected by each team is very different. On Team Excel where each instrument is controlled and specialised to its task, the data from them is very complete and comprehensive on that specific thing — say, light readings, infrared etc. On Team PowerPoint, there are big data gaps for each instrument but there appear to be more comprehensive deep analyses of particular phenomena where all of the instruments can be oriented towards the measurement of a common thing. This is a classic trade-off between specialised knowledge and deep knowledge. What is extraordinary is that they bake the trade-off into their organisational structure and also decision-making tools — literally emphasizing different apps in Microsoft Office.

We probably don't consciously think too much about how the technology and tool choices we make can affect how the organization actually functions, what particular approaches and skills tend to dominate, and even what gets recognized and rewarded. In the example from the Digitopoly piece, an argument is made that both of these approaches, Team Excel with its focus on individual accountability and control, and Team PowerPoint that relied much more on shared accountability and the 'big picture', are needed and have value.

Where we get into trouble, I think, is when one type of technology, say PowerPoint, becomes dominant or the de facto method in an organization for communicating information and as a decision support tool. It is, by its nature, shallow, and it assumes that viewers and readers understand the details and deeper context about the subject matter that are typically just about impossible to convey in a slide deck.

Similar arguments can be made about cultures where 95% of communication happens over email, or is tied up in impossibly complex Excel workbooks.

    We often choose the easy or expected technology solution out of habit, or out of a kind of cultural allegiance. It is fascinating how these technology choices can impact much more than we think.

    Team Excel. Team PowerPoint. That really shouldn't be the choice. Team 'Right tool for the job' is. Choose wisely.

    Have a great week!

    Tuesday
Feb 24, 2015

    On trusting algorithms, even when they make mistakes

Some really interesting research from the University of Pennsylvania on our (people's) tendency to lose faith and trust in data forecasting algorithms (or, more generally, in advanced forms of smart automation) more quickly than we lose faith in other humans' capabilities (and our own) after observing even small errors from the algorithm, even when the evidence shows that, relative to human forecasters, the algorithms are still superior.

From the abstract of Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err:

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet, when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

    Let's unpack that some. In the research conducted at Penn, the authors showed that even when given evidence of a statistical algorithm's overall superior performance at predicting a specific outcome (in the paper it was the likelihood of success of MBA program applicants that the humans and the algorithm attempted to predict), most people lost faith and trust in the algorithm, and reverted to their prior, inferior predictive abilities. And in the study, the participants were incentivized to pick the 'best' method of prediction: They were rewarded with a monetary bonus for making the right choice. 

But still, and consistently, the human participants more quickly lost faith and trust in the algorithm, even when logic suggested they should have selected it over their (and other people's) predictive abilities.

    Why is this a problem, this algorithm aversion?

    Because while algorithms are proving to be superior at prediction across a wide range of use cases and domains, people can be slow to adopt them. Essentially, whenever prediction errors are likely—as they are in virtually all forecasting tasks—people will be biased against algorithms, because people are more likely to abandon an algorithm than a human judge for making the same mistake.
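The dynamic the researchers describe is easy to see in a toy simulation. To be clear, everything below (the noise levels, the sample size, the "noticeable miss" threshold) is made up purely to illustrate the point: an algorithm that is more accurate than a human on average still visibly errs on individual forecasts, which is exactly the opening for algorithm aversion.

```python
import random

def simulate(n=1000, human_noise=8.0, algo_noise=5.0, seed=42):
    """Toy forecasting contest: both the human and the algorithm guess a
    true value with Gaussian noise. The algorithm's noise is smaller (it is
    more accurate on average), yet it still misses on individual trials."""
    rng = random.Random(seed)
    human_err = algo_err = 0.0
    algo_visible_mistakes = 0
    for _ in range(n):
        truth = rng.uniform(0, 100)
        human_guess = truth + rng.gauss(0, human_noise)
        algo_guess = truth + rng.gauss(0, algo_noise)
        human_err += abs(human_guess - truth)
        algo_err += abs(algo_guess - truth)
        if abs(algo_guess - truth) > 5:   # a miss big enough to notice
            algo_visible_mistakes += 1
    return human_err / n, algo_err / n, algo_visible_mistakes

human_mae, algo_mae, mistakes = simulate()
# The algorithm's average error comes out lower than the human's, yet it
# still racks up hundreds of noticeable misses over 1000 forecasts.
print(human_mae, algo_mae, mistakes)
```

A user who only watches individual forecasts sees the algorithm fail again and again, even while it is quietly winning on average error, which is the behavior the Penn study documents.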

    What might this mean for you in HR/Talent?

    As more HR and related processes, functions, and decisions become 'data-driven', it is likely that sometimes, the algorithms we adopt to help make decisions will make mistakes. 

That 'pre-hire' assessment tool will tell you to hire someone who doesn't actually end up being a good employee.

    The 'flight risk' formula will fail to flag an important executive as a risk before they suddenly quit, and head to a competitor.

    The statistical model will tell you to raise wages for a subset of workers but after you do, you won't see a corresponding rise in output.

    That kind of thing. And once these 'errors' become known, you and your leaders will likely want to stop trusting the data and the algorithms.

What the Penn researchers are saying is that we have much less tolerance for the algorithm's mistakes than we do for our own. And maintaining that attitude in a world where the algorithms are only getting better is, well, a mistake in itself.

The study is here, and it is pretty interesting; I recommend it if you are interested in making your organization more data-driven.

    Happy Tuesday.

    Thursday
Feb 19, 2015

    A feature that Email should steal from the DMV

In New York State, and I suspect in other places as well, when you visit a Department of Motor Vehicles (DMV) office to get a new license, register your sailing vessel, or try to convince the nice bureaucrats that you did in fact pay those old parking ticket fines, there is generally a two-step process for obtaining services.

    You first enter the office and wait in line to be triaged by a DMV rep, and once he/she determines the nature of your inquiry, you receive a little paper ticket by which you are assigned a customer number, and an estimated waiting time until you will be called by the next DMV agent. You then commence waiting until your number is announced and you can complete your business. 

That little bit of information, the estimated wait time, is the aspect of the DMV experience that I think has tons of potential in other areas, most notably in Email communications. The DMV estimates your wait time, (I imagine), in a really simplistic manner. It is a function of the number of customers waiting ahead of you, the number of DMV agents available, and the average transaction time for each customer to be served. Simple math, and probably pretty accurate most of the time.
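That simple math can be sketched in a couple of lines. The function name and the numbers here are mine, not anything from an actual DMV system:

```python
def estimated_wait_minutes(customers_ahead, agents_available, avg_transaction_minutes):
    """Naive DMV-style wait estimate: everyone ahead of you must be
    served, and the agents work through the queue in parallel."""
    if agents_available < 1:
        raise ValueError("need at least one agent on duty")
    return customers_ahead * avg_transaction_minutes / agents_available

# 12 customers ahead of you, 4 agents, about 5 minutes per transaction
print(estimated_wait_minutes(12, 4, 5))  # 15.0 minutes
```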

The Email version of the 'Estimated Wait Time' function would be used to auto-reply to every (or selected) incoming email message with an 'Estimated Response Time' that would provide the emailer with information about how long they should expect to wait before receiving a reply.

How would this work, i.e., what would the 'Estimated Response Time' algorithm need to take into account? Probably at least the following data points.

    1. The relationship between the sender and the recipient - how frequently emails are exchanged, how recent was the last exchange, and what has been the typical response time to this sender in the past

    2. The volume of email needing action/reply in the recipient's inbox at the time the email is received, and how that volume level has impacted response times in the past

    3. The recipient's calendar appointments (most email and calendar services are shared/linked), for the next 1, 3, 12, 24, etc. hours. Is the recipient in meetings all day? Going on vacation tomorrow? About to get on a cross-country flight in two hours?

    4. The subject matter of the email, (parsed for keywords, topics mentioned in the message, attachments, etc.)

    5. Whether the recipient is in the 'To' field or in the 'CC' field, whether there are other people in the 'To' and 'CC' fields, and the relationship of the recipient to anyone else receiving the email

    And probably a few more data points I am not smart enough to think of in the 20 minutes or so I have been writing this.
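One naive way to fold those five data points into a single number might look like the sketch below. Every name, weight, and adjustment here is hypothetical, invented just to make the idea concrete; a real system would presumably learn these weights from historical response data rather than hard-code them.

```python
def estimated_response_hours(
    past_median_hours,      # 1. typical response time to this sender
    inbox_backlog,          # 2. emails awaiting action in the inbox
    busy_hours_ahead,       # 3. hours of upcoming calendar appointments
    urgency_score,          # 4. 0.0-1.0, parsed from subject/keywords
    is_direct_recipient,    # 5. True if in 'To', False if only CC'd
):
    """Hypothetical 'Estimated Response Time' combining the five data
    points above. Starts from past behavior, then adjusts for load."""
    estimate = past_median_hours
    estimate += 0.1 * inbox_backlog      # each queued email adds delay
    estimate += busy_hours_ahead         # meetings block reply time
    if not is_direct_recipient:
        estimate *= 2.0                  # CC'd mail tends to wait longer
    estimate *= (1.5 - urgency_score)    # urgent topics get bumped up
    return round(estimate, 1)

# A frequent correspondent, a moderately busy day, an average-urgency note
print(estimated_response_hours(2.0, 30, 4.0, 0.5, True))  # 9.0 hours
```

Even this crude version captures the core idea: start from how the recipient has behaved with this sender before, then push the estimate up or down based on what their inbox and calendar look like right now.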

    The point?

That a smart algorithm, even a 'dumb' one like at the DMV, could go a long way toward helping manage communications and workflow, and toward properly setting expectations. When you send someone an email you (usually) have no idea how many other emails they just received that hour, what their schedule looks like, the looming deadlines they might be facing, and the 12,458 other things that might influence when/if they can respond to your message. But with enough data, and the ability to learn over time, the 'Estimated Response Time' algorithm would let you know as a sender what you really need to know: whether and when you might hear back.

Let's just hope that once the algorithm is in place, we don't all get too many "Estimated Response Time = NEVER" replies.

Now please, Google, or Microsoft, or IBM, get to work on this.