
    Entries in decisions (6)

    Monday
    Jan 14, 2019

    More information is not always better information, or leads to better decisions

    Quick update for a busy, cold Monday in the Northeast. Over the weekend, while enjoying my typical evening of NBA League Pass and catching up on some reading, I ran into this excellent piece on the Behavioural Investment blog titled 'Can More Information Lead to Worse Investment Decisions?'

    In the piece, author Joe Wiggins references a research study titled 'Effects of Amount of Information on Judgement Accuracy and Confidence' that was published in 2008 in the academic journal Organizational Behavior and Human Decision Processes, (I trust you are all up to date on your stack of these journals). Long, (really long) story short, the researchers found that giving subjects more information that was meant to help them make decisions had interesting and counter-intuitive effects. Essentially, more information did not increase the quality of decision making in a significant way, while at the same time it increased the subjects' confidence in their decisions, which, as we just noted, did not in fact get any better.

    Here's a simple chart from one of the experiments showing what happened to decision quality and subject confidence when more information and data about the decision was made available to subjects. 

    The data above shows how subjects in the study had to forecast the winner of a number of college football games based on sets of anonymised statistical information about the teams. The information came in blocks of 6 (so for the first trial of predictions the participant had 6 pieces of data), and after each subsequent trial of predictions they were given another block of information, up to 5 blocks (or 30 data points in total), and had to update their predictions. Participants were asked to predict the winner and to state their confidence in that judgement, between 50% and 100%. The aim of the experiment was to understand how increased information impacted both the accuracy of and the confidence in their decisions/predictions.
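    To make the setup a bit more concrete, here is a rough Python sketch of how accuracy and confidence could be tallied per information block in an experiment structured like this one. The trial records, numbers, and names below are made up purely for illustration - this is not code or data from the actual study.

    from collections import defaultdict

    # Each record: (block_number, prediction_was_correct, stated_confidence_pct)
    # Block 1 = 6 cues, block 2 = 12 cues, ... block 5 = 30 cues.
    # These records are invented placeholders, not data from the study.
    trials = [
        (1, True, 55), (1, False, 60),
        (2, True, 62), (2, False, 65),
        (3, False, 70), (3, True, 72),
        (4, True, 74), (4, False, 78),
        (5, False, 80), (5, True, 83),
    ]

    by_block = defaultdict(lambda: {"correct": 0, "n": 0, "conf_sum": 0})
    for block, correct, confidence in trials:
        stats = by_block[block]
        stats["n"] += 1
        stats["correct"] += int(correct)
        stats["conf_sum"] += confidence

    for block in sorted(by_block):
        s = by_block[block]
        accuracy = s["correct"] / s["n"]
        mean_conf = s["conf_sum"] / s["n"]
        print(f"Block {block} ({block * 6} cues): "
              f"accuracy {accuracy:.0%}, mean confidence {mean_conf:.0f}%")

    The pattern the researchers observed would show up here as accuracy staying roughly flat across blocks while mean confidence climbs.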

    Joe at Behavioural Investment sums up the results of the experiment really well:

    The contrasting impact of the additional information is stark – the accuracy of decision making is flat, decisions were little better with 30 statistics than just 6, however, participant confidence that they could select the winner increased materially and consistently.  When we come into possession of more, seemingly relevant, information our belief that we are making the right decision can be emboldened even if there is no justification for this shift in confidence levels.

    A really important reminder, and a kind of warning, for any of us, say in HR in 2019, who are increasingly seeking, and increasingly able, to gather more and more data and information to use and apply in HR and talent decision making. If more information does not always, (or maybe ever), lead to better decisions, then we need to be much more careful about how we plan to gather, process, and apply data for decision making.

    The most basic takeaway from this kind of study is that we probably need to spend much more time thinking about what data and information is actually meaningful or predictive for a given decision, rather than simply increasing our efforts to gather more and more data from every available source, under the probably false impression that more = better.

    There are plenty of reasons why we are inclined to gather more data if we can: we might not know what information is actually relevant, so we simply collect everything; we want to show we did a lot of research before taking a decision; or we want to feel more comfortable with our decision because it is supported by more data.

    But I think it's best to start small with the data sets we apply to decisions, and take the time to test whether the data we already possess is meaningful and predictive before chasing more data for its own sake.

    Ok, that's it, I'm out - have a great week!

    Tuesday
    Jun 12, 2018

    Balancing data and judgment in HR decision making

    A few weeks ago I did an HR Happy Hour Show with Joshua Gans, co-author of the excellent book Prediction Machines. On the show, we talked about one of the central ideas in the book - the continuing importance of human judgment in decision making, even in an environment where advances in AI technology make predictions (essentially options) more available, numerous, and inexpensive.

    I won't go back through all the reasoning behind this conclusion; I encourage you to listen to the podcast and/or read the book for that. But I did want to point out another excellent example of how this idea of AI and prediction combined with human judgment plays out in human capital management planning and decisions. A recent piece in HBR titled Research: When Retail Workers Have Stable Schedules, Sales and Productivity Go Up shares some really interesting findings from a study that aimed to find out whether giving retail workers more schedule certainty and clarity would impact business results, and if so, how.

    Some back story on the idea behind the study first. As demand planning and workforce scheduling software has developed over the years and become much more sophisticated, many retailers now have the information and the ability to set and adjust worker schedules much more dynamically, almost in real time, than they could in the past. By combining sales and store traffic estimates with workforce planning and scheduling tools that match staffing levels to that demand, store managers are, for the most part, able to optimize staffing, (and therefore control labor costs), much more precisely.
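    To illustrate the basic demand-matching idea, here is a minimal Python sketch that converts an hourly traffic forecast into staffing levels. The forecast numbers, the service-capacity assumption, and the floor minimum are all made up for the example - real scheduling tools are far more sophisticated than this.

    import math

    # Hypothetical hourly traffic forecast for one store day (shoppers per hour).
    hourly_traffic_forecast = [12, 18, 25, 40, 55, 60, 48, 30, 20]

    SHOPPERS_PER_ASSOCIATE_PER_HOUR = 15   # assumed service capacity
    MIN_STAFF_ON_FLOOR = 2                 # assumed minimum coverage

    def staff_needed(traffic: int) -> int:
        """Round staffing up to cover the forecast, never below the floor minimum."""
        return max(MIN_STAFF_ON_FLOOR,
                   math.ceil(traffic / SHOPPERS_PER_ASSOCIATE_PER_HOUR))

    schedule = [staff_needed(t) for t in hourly_traffic_forecast]
    print(schedule)  # [2, 2, 2, 3, 4, 4, 4, 2, 2]

    The point is simply that once the forecast exists, the staffing math is mechanical - which is exactly why a purely algorithmic schedule can shift from week to week as the forecast shifts.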

    But while optimizing the staffing levels in a retail store sounds like a sound business practice, and makes the owners of the store happy (typically via reduced labor costs), it also often makes the staff unhappy. In a software- and AI-driven staffing model, workers can find their schedules uncertain, changing from week to week, and can even find themselves losing expected shifts on very short notice, sometimes less than two hours.

    The data and the AI might be 'right' when they recommend a set of staff schedules based on all the available information, but, as we will see in the research referenced in the HBR piece, the data and the AI usually fail to see and understand the impact this kind of scheduling has on the actual people that have to do the actual work.

    You really should read the whole piece on HBR, but I want to share the money quote here - what the researchers found, and recommended, as the best way for a retailer to incorporate these kinds of advanced AI tools to help set retail store worker schedules:

    At the start of the study, we often heard HQ fault store managers for “emotional scheduling” — a script pushed by the purveyors of scheduling software. “In measuring customer experience and making decisions related to a labor model, retailers should rely solely on facts. Too often, changes are made because of an anecdotal or emotional response from the field,” notes a best practices guide from Kronos.

    However, our experiment shows that a hybrid approach of combining algorithms with manager intuition can lead to better staffing decisions. While our experiment provided guidelines for managers, it still allowed the managers to make the final decision on how much of the interventions to implement. The increase in sales and productivity witnessed at the Gap shows that retailers stand to benefit when they allow discretion to store managers.

    What were some of the benefits of giving managers at least some discretion over scheduling, even when the AI made different recommendations?

    When managers could give more workers more 'certain' or predictable schedules, most of those workers benefited: they could predict commute times, schedule things like education, child care, and other jobs, and connect more deeply with customers and co-workers. In short, they were happier, and this tended to lead to better work performance, better customer service, and in the case of the stores studied in the HBR piece, increased revenues and profits.

    In time, maybe the AI will learn to understand this nuanced, subtle, but important impact that work schedules have on workers, and how that impacts business results. But until then, it seems best to let the AI make recommendations on the optimal staffing decisions, and let the managers make the final call, based on what they know about their staff, their customers, and, well, human nature in general.

    Have a great day!

    Tuesday
    Jul 11, 2017

    Learn a new word: The General Theory of Second Best

    There's nothing I care more about than NBA basketball, (I promise this isn't another basketball post, but I may have to dig out a basketball analogy to make the point), with the possible exception of learning new things.

    Which is why, I think, I run the 'Learn a new word' series on the blog. I am also falling into the trap of thinking 'if this is interesting to me, then it should be interesting to people who read this blog'. After 10 years of this, I am not really sure if that is even true. But I persist.

    So here's today's 'Learn a new word' entry - The General Theory of Second Best.

    What in the heck is that?

    A decent description can be found in the Economist: (emphasis mine)

    The theory of the second-best was first laid out in a 1956 paper titled, sensibly enough, "The General Theory of the Second Best", [paid access] by Richard Lipsey and Kelvin Lancaster. Roughly put, Lipsey and Lancaster pointed out that when it comes to the theoretical conditions for an optimal allocation of resources, the absence of any of the jointly necessary conditions does not imply that the next-best allocation is secured by the presence of all the other conditions. Rather, the second-best scenario may require that other of the necessary conditions for optimality also be absent—maybe even all of them. The second-best may look starkly different than the first best.

    Let's think on that for a moment and take it back (sorry) to the basketball analogy I hinted at in the open.

    The optimal allocation of resources for, say, a basketball team has traditionally consisted of five different kinds of players, with different body types, playing styles, and characteristics, that when assembled would provide the team with the right balance of scoring, passing, rebounding, and defensive play that would result in winning.

    But let's say that the team can't acquire or develop one of the positions - let's say the point guard, the player who usually is charged with handling the ball, setting up his/her teammates for easy scores, and functioning as the on-court leader of the team. If this example team can't find a good enough point guard, the Theory of Second Best suggests that the 'answer' to the problem isn't making sure the other four positions/roles are filled as designed and slotting in any old player as the point guard.

    The theory suggests that the 'optimal' solution, when one resource (the point guard), is missing, may be to take a completely different approach to building the team. Maybe the team looks for more 'point guard' like skills in the other positions, or maybe the team implements a different style of offense entirely to mitigate the problem.

    The real point is that once conditions appear that make the 'first best' strategy impossible to execute, you may need to think really, really differently about what will constitute the 'second best' strategy.

    The second best may look starkly different than the first best.

    I really dig that, and I hope you think about it too the next time your plans in business or in life run into some challenges.

    Tuesday
    Feb 16, 2016

    There are only 5 possible reasons for any business problem - Bar Rescue edition

    Some folks who know me know that about a thousand years ago I spent a fair bit of time working in the Middle East - in Saudi Arabia to be precise. And these same folks also know that every one of the probably hundreds of stories I have told about my time in Saudi falls into only five major categories - it was really hot, we had to find gray market beer, I played rugby with a wild group of expats, we socialized with the (mostly Irish and Canadian) nurses from the local hospital, and sometimes you had to deal with some scary police/security people.

    Every story, no matter how it starts, ends up in one of those five classifications. In fact, over the years I got tired of telling, (and people got tired of listening to) the old tales, and now I just list the five categories. The details of any one event or experience don't really matter all that much anyway. But the categories are still valid.

    What made me think about this again was that over the long weekend I caught a few episodes of a marathon of one of my favorite reality TV shows - Bar Rescue. If you are not familiar with the show, the basic premise is this: Veteran bar and hospitality consultant and expert Jon Taffer gets summoned to 'rescue' or help fix a bar or bar/restaurant that is failing, and possibly about to go out of business.

    Taffer will bring in a team of experts - a master mixologist, a chef, designers, and construction crews - that together help to renovate the bar, motivate and train the owners and staff, and redesign products and processes in hopes of giving the bar a new start and keeping it in business.

    But what's the connection to 'Steve's boring Middle East stories?' you might be asking. 

    Well, it is this: Just like my dopey stories, every major problem facing the failing business owners on Bar Rescue falls into five categories as well. Sure, there may be some subtle differences in specific situations, and most of these disaster bars suffer from multiple problems, but at their center they are mostly, remarkably, the same.

    Every failing bar's problems fall into one of these five categories, (with some specific manifestations noted where I can think of them).

    1. Lack of leadership from the bar owners - this shows up in a few ways on the show; my favorites are the owners that simply get trashed drunk at the bar every night and have no idea what is really happening. Other times the owners are part-time or 'hobby' owners and have other businesses or jobs that keep them from paying enough attention to the failing bar.

    2. Terrible hiring decisions - often this is the 'professional' bar manager that has no idea what he/she is doing. Also, lots of 'friends and family' hiring of people that are totally wrong for the jobs they are in or are taking advantage of their relationship with the owner to get away with doing substandard work.

    3. Lack of attention to maintenance and upkeep - these are the bars with dead fruit flies in the bottles, accumulated grease covering everything in the kitchen, and tubs of expired and/or rotting food in the walk-in. It is actually kind of shocking what some of these failing bars have allowed to happen - at times it even threatens the health and safety of workers and customers.

    4. Little or no understanding of the market/customers - time and time again Taffer and his team have to advise and educate the bar owners about the local neighborhood, the main drivers of potential traffic to the bar, and how the bar stacks up against the local competition. Typically in these situations, the bar owners have failed to recognize and adapt to changes - trends, preferences, and expectations of customers that are not the same as they once were back when the bar was more successful.

    5. Failure to understand the economics - this one is pretty common on the show and manifests itself in a few ways. Sometimes the owners really don't know how much money they are losing, or how much they owe. Sometimes they don't have a good grasp of the financial drivers of their business, like knowing which food or drink items are most profitable. Or they are getting fleeced by staff (or even by themselves) giving away too many free rounds of drinks and not realizing how much that is hurting the business.

    Just like my Saudi stories can be pretty easily classified, every failing bar's problems on Bar Rescue can fit into one of the above categories. And the more interesting thing about Bar Rescue, compared to my stories, is that these bar/business problems are pretty likely the same broad set of categories just about any business faces too.

    Issues with leadership at the top. Bad hires, poorly trained staff, people in the wrong roles. Failing to keep track of the basic elements needed for any kind of success. Not keeping up with market and business condition changes. And finally, not watching and understanding the finances. Every problem (pretty much anyway), fits into one of these buckets.

    Figure out in which one of these buckets that most of your business problems fit and you, like the Bar Rescue team, will know where to spend your time and energy making things right.

    Thursday
    Jun 18, 2015

    Learn a new word Thursday: Knightian Uncertainty

    Welcome back to the latest installment of 'Learn a new word Thursday', where I share with you some small, but hopefully interesting and relevant, word or concept that, since it is new to me, must be new to (many of) you as well.

    Submitted for your consideration today's word/concept: Knightian uncertainty.

    From your pals at Wikipedia:

    In economics, Knightian uncertainty is risk that is immeasurable, not possible to calculate.

    Knightian uncertainty is named after University of Chicago economist Frank Knight (1885–1972), who distinguished risk and uncertainty in his work Risk, Uncertainty, and Profit:

    "Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated.... The essential fact is that 'risk' means in some cases a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and crucial differences in the bearings of the phenomena depending on which of the two is really present and operating.... It will appear that a measurable uncertainty, or 'risk' proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all."

    Did you catch the distinction laid out there, between 'risk,' which can generally be measured and estimated (and therefore planned for), and 'uncertainty' of the kind that is not measurable at all, the so-called 'Knightian' uncertainty?

    For example, we might be able to assess the 'risk' of any given commercial flight arriving more than an hour late, say, but we have no real ability to estimate the likelihood of any given route being profitable in, say, 25 years' time.
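    A quick toy example of the difference, in Python: the flight-delay risk can be estimated from observed frequencies, while the 25-year question has no comparable data set to lean on. The delay figures below are invented for illustration.

    # Hypothetical delay records for a route (delay in minutes per past flight).
    past_delays_minutes = [5, 0, 75, 12, 90, 3, 0, 65, 20, 8, 110, 15]

    late_threshold = 60  # "more than an hour late"
    p_late = sum(d > late_threshold for d in past_delays_minutes) / len(past_delays_minutes)
    print(f"Estimated risk of a >1 hour delay: {p_late:.0%}")  # 33%

    # There is no analogous historical sample from which to estimate whether this
    # route will still be profitable in 25 years - that is the Knightian kind of uncertainty.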

    Let's read that again just so we are sure it sinks in: a measurable uncertainty... is so far different from an unmeasurable one that it is not in effect an uncertainty at all.

    So the lesson here is to be a little more careful, and a little more precise when tossing about terms like 'risk' and 'uncertainty.' If you can measure it, can draw some reasonable conclusions about the likelihood of an occurrence (or failure for something to occur), and can get smarter over time from these practices, then whatever it is probably isn't 'uncertain' at all. 

    And the interesting thing from an HR and talent management perspective is that with the rise of more sophisticated technologies for assessing competencies and skills, and with massive amounts of actual workforce data upon which to test our theories, more and more 'people' decisions are becoming much less uncertain and simply more risky.

    Which doesn't sound like much of an improvement until you realize that just a few years ago just about every people-related decision was an uncertainty. Perhaps even a Knightian uncertainty.