    Entries in data (67)

    Wednesday
Feb 25, 2015

    CHART OF THE DAY: There's Just 5 Million Open Jobs in the USA

    Here's your latest Chart of the Day, courtesy of my two favorite online data sources, the Bureau of Labor Statistics, (specifically the Job Openings and Labor Turnover Summary, or JOLTS report), and the FRED data analysis and visualization tool.

    First, the chart, then some FREE commentary from your humble scribe:

    1. First, the actual numbers - there were 5.028 million job openings in the US on the last business day of December 2014, the highest number since December 2001.

2. The chart shows a pretty much straight up and to the right climb in job openings since early 2009, meaning the recession and the labor market disruptions it caused seem far, far behind us.

3. This increase in openings is driving organizations like Walmart to raise wages for many of their workers - across a wide range of industries and geographies, (including previously 'low worker power' ones like retail), the balance of power is shifting. 

4. Average weekly earnings for production and nonsupervisory employees are climbing as well - not as fast as job openings, but certainly on the same trajectory.
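If you want to poke at the underlying numbers yourself, here is a minimal sketch, (in Python, assuming you have the pandas_datareader library installed; JTSJOL and UNEMPLOY are the FRED series IDs for total nonfarm job openings and the unemployment level), that pulls the data and also computes the unemployed-per-opening ratio I mention below:

```python
# A hedged sketch: pull the JOLTS openings and unemployment-level series
# from FRED. Assumes the pandas_datareader package; JTSJOL and UNEMPLOY
# are the FRED series IDs (both reported in thousands).
import matplotlib.pyplot as plt
import pandas_datareader.data as web

data = web.DataReader(["JTSJOL", "UNEMPLOY"], "fred", start="2001-01-01")

# Unemployed workers per job opening: roughly 7 at the depths of the
# recession, under 2 by the end of 2014
data["unemployed_per_opening"] = data["UNEMPLOY"] / data["JTSJOL"]
print(data["unemployed_per_opening"].dropna().tail())

data["JTSJOL"].plot(title="Job openings, total nonfarm (thousands)")
plt.show()
```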

    So what does this mean for you, Mr. or Ms. HR pro?

    Probably nothing new, or at least nothing you have not been hearing about and likely experiencing in the last 18 months or so. 

Lots more noise in the system, making it harder to get your company and your opportunities noticed in a much more crowded market of available jobs.

Many fewer un- and under-employed individuals around who might not always have been qualified for your openings, but at least were a source of steady candidate flow. At the depths of the recession, there were about 7 unemployed workers for every job opening. Today that ratio is less than 2 to 1.

You, having a harder time coming up with explanations/excuses for your leadership and hiring managers, who (traditionally) are much slower to accept these changes in the labor market and the ensuing power shifts. I recommend forwarding them the Walmart story above, with a subject line that says 'See, even Walmart is having a hard time finding and keeping people'.

Long story short, we are entering year 6 of an extended recovery/tightening of the labor market. Talent is in shorter supply, opportunities are everywhere, the Dow and the S&P 500 are at record highs, and the people you need to find, attract, and retain are, well, harder to find, attract, and retain.

    Have fun, it's a jungle out there.

    Tuesday
Feb 24, 2015

    On trusting algorithms, even when they make mistakes

Some really interesting research from the University of Pennsylvania on our (people's) tendency to lose faith and trust in data forecasting algorithms, (or, more generally, advanced forms of smart automation), more quickly than we lose faith in other humans' capabilities (and our own), after observing even small errors from the algorithm - and even when seeing evidence that, relative to human forecasters, the algorithms are still superior.

From the abstract of Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err:

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet, when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

    Let's unpack that some. In the research conducted at Penn, the authors showed that even when given evidence of a statistical algorithm's overall superior performance at predicting a specific outcome (in the paper it was the likelihood of success of MBA program applicants that the humans and the algorithm attempted to predict), most people lost faith and trust in the algorithm, and reverted to their prior, inferior predictive abilities. And in the study, the participants were incentivized to pick the 'best' method of prediction: They were rewarded with a monetary bonus for making the right choice. 

But still, and consistently, the human participants more quickly lost faith and trust in the algorithm, even when logic suggested they should have selected it over their own (and other people's) predictive abilities.

    Why is this a problem, this algorithm aversion?

    Because while algorithms are proving to be superior at prediction across a wide range of use cases and domains, people can be slow to adopt them. Essentially, whenever prediction errors are likely—as they are in virtually all forecasting tasks—people will be biased against algorithms, because people are more likely to abandon an algorithm than a human judge for making the same mistake.
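To make that logic concrete, here is a toy simulation, (not from the Penn study - the error sizes and numbers are made up), showing that a forecaster with a smaller average error still makes plenty of visible mistakes, which is exactly when people tend to abandon it:

```python
# Toy illustration, not from the Penn study: an 'algorithm' with a smaller
# average error still makes visible mistakes now and then - and those are
# the moments when people abandon it.
import numpy as np

rng = np.random.default_rng(42)
truth = rng.normal(50, 10, size=1000)        # the outcomes being forecast
algo = truth + rng.normal(0, 4, size=1000)   # algorithm: smaller errors on average
human = truth + rng.normal(0, 8, size=1000)  # human judge: larger errors on average

print(f"Algorithm mean absolute error: {np.mean(np.abs(algo - truth)):.1f}")
print(f"Human mean absolute error:     {np.mean(np.abs(human - truth)):.1f}")

# The superior forecaster is still visibly wrong a meaningful share of the time
print(f"Algorithm forecasts off by more than 10: {np.mean(np.abs(algo - truth) > 10):.0%}")
```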

    What might this mean for you in HR/Talent?

    As more HR and related processes, functions, and decisions become 'data-driven', it is likely that sometimes, the algorithms we adopt to help make decisions will make mistakes. 

That 'pre-hire' assessment tool will tell you to hire someone who doesn't actually end up being a good employee.

    The 'flight risk' formula will fail to flag an important executive as a risk before they suddenly quit, and head to a competitor.

The statistical model will tell you to raise wages for a subset of workers, but after you do, you won't see a corresponding rise in output.

    That kind of thing. And once these 'errors' become known, you and your leaders will likely want to stop trusting the data and the algorithms.

What the Penn researchers are saying is that we have much less tolerance for the algorithm's mistakes than we do for our own. And maintaining that attitude in a world where the algorithms are only getting better is, well, a mistake in itself.

The study is here, and it is pretty interesting; I recommend it if you are interested in making your organization more data-driven.

    Happy Tuesday.

    Tuesday
Feb 10, 2015

    CHART OF THE DAY: The Misery Index

    Spotted on the Pragmatic Capitalism site: The Misery Index Falls to an 8 Year Low.

    First the chart, then a quick explanation of The Misery Index itself, and finally, of course, some FREE 'expert' commentary on what if anything this kind of data means for HR/Talent pros.

    Chart:

    The "Misery" index is the sum of the rate of inflation and the rate of unemployment. It’s name is apropos because a high rate of inflation combined with a high unemployment rate are miserable things to experience. Where is the Misery index currently? It sits at an 8 year low. And perhaps more tellingly, today’s Misery index level of 6.9% is well below the 70 year average of 9.5%

The declining Misery Index reflects improving economic conditions in the USA overall. Times may not be great, and the economic recovery is certainly unequally distributed, but for most of us the worst years of 2008 and 2009 seem pretty far away at this point.

    What might the 'Misery' index have to tell the HR/Talent pro?

    One thing that comes to mind is that our perception of satisfaction or happiness and even (sorry to use the word again) engagement at work is derived from multiple and complex sources. In HR we talk plenty about engagement rates and trends and voluntary turnover and percentage of job offers accepted, but we usually only talk about these metrics in isolation. 

We compare this quarter's engagement rate with last quarter's rate. We look at the trend line in voluntary turnover as if this phenomenon exists in a vacuum, and is not impacted or affected by other business conditions.

    We are measuring more things, but probably not getting the deep levels of insight that measurement once promised. 

The Misery Index is a crude way to acknowledge this truth: that neither inflation nor unemployment alone provides all the answers as to the relative health of an economy, and consequently, the 'misery' of its citizens.

    In the workplace, perhaps we should consider our own versions of the Misery Index. Graph disengagement rate AND voluntary turnover rate together and show that to your CEO. Or maybe do a trend line with average annual salary increase against recorded absence rates to see if your 2.3% salary increases are potentially contributing to people checking out.
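Here is a minimal sketch of that idea, (the disengagement and turnover numbers below are made up for illustration), plotting two HR metrics together instead of in isolation:

```python
# Hypothetical example: plot two (made-up) HR metrics on one chart so the
# trends can be read together, the way the Misery Index combines two measures.
import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3", "Q4"]
disengagement_rate = [22.0, 25.0, 28.0, 31.0]   # hypothetical, percent
voluntary_turnover = [3.1, 3.4, 4.0, 4.6]       # hypothetical, percent

fig, ax1 = plt.subplots()
ax1.plot(quarters, disengagement_rate, "b-o")
ax1.set_ylabel("Disengagement rate (%)", color="b")

# A second y-axis keeps both trends readable despite their different scales
ax2 = ax1.twinx()
ax2.plot(quarters, voluntary_turnover, "r-s")
ax2.set_ylabel("Voluntary turnover (%)", color="r")

ax1.set_title("Disengagement and voluntary turnover, together")
plt.show()
```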

    Misery, (and I think, happiness too), is a complex thing. Thinking about either of them in one dimension leads to shallow understanding and conclusions of limited value.

    Plus, what is making me miserable today isn't even on any of these charts. Hint: It NEVER stops snowing where I live.

    Happy Tuesday.

    Tuesday
Jan 20, 2015

    CHART OF THE DAY: What does it take for content to get noticed?

Really interesting piece, (with accompanying chart that I will re-share below), on the GigaOM site on how social and online sharing is now truly the way readers (and potential customers and job candidates) discover content.

    The gist of the article was to point out that while they might like to think they are not in the same business as Buzzfeed, even more 'respected' publishers like the New York Times have to compete with the Buzzfeeds of the online world using modern metrics that describe success in online content creation - namely social shares (Twitter, LinkedIn, Facebook, etc.).

    Check out the chart below, (Email and RSS subscribers may need to click through), then some FREE commentary from me after the data:

1. It is pretty obvious that for these big publishers, the bar for labeling a piece of content a 'social' success is really pretty high - at least 2K shares. Think about what you and your company might be sharing on social networks from your corporate blog, or posting your open jobs on LinkedIn or Twitter. Two thousand shares of a piece of content is a ton of shares, yet by the standards of the modern web, that barely starts to get you noticed. Less than 100 social shares leaves your content essentially 'unseen', (a rough bucketing sketch follows this list).

    2. Unless, of course, it is 'seen' by the exact, right people. And that means most of us (me too, just look at the number of RTs of this post for example), have to really understand how to determine, classify, target, and attempt to engage a specific target market of interest in order to have success. There is almost no way any of us 'normals' are ever going to approach mass social virality like the masters of the modern web (Buzzfeed, HuffPo) can. If you post a job on Twitter and it is not RT'ed does it even exist?

3. For the HR Tech spin on things, if you have employed a social sharing strategy for your jobs and employer brand building content, but you are not utilizing one of the several HR tech tools on the market that provide the capability to track, analyze, and help you determine actual results (clicks, shares, applicants, hires) for your jobs content, then you probably need to consider that investment in 2015. Since the easy and most common measure of success on the social web, absolute number of 'shares', is almost always going to leave you in the 'unnoticed' bucket, you need to find a way to 'prove' your social strategy is actually working. And the only way to do that is to better understand what happens to those lonely tweets after you send them out into the big, scary, social web.
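If you want a crude starting point before making that investment, here is a rough sketch, (the bucket cut-offs are illustrative, borrowed from the thresholds in point 1, not taken from the GigaOM data), for classifying your own content by share count:

```python
# Rough, illustrative bucketing of content by total social shares. The
# cut-offs echo the thresholds in point 1 above; they are not from GigaOM.
def share_bucket(total_shares: int) -> str:
    """Classify a piece of content by its total social shares."""
    if total_shares < 100:
        return "unseen"
    if total_shares < 2000:
        return "barely noticed"
    return "social success"

# Hypothetical posts and share counts
posts = {"Open req: data analyst": 14, "CEO blog post": 480, "Viral listicle": 5200}
for title, shares in posts.items():
    print(f"{title}: {shares} shares -> {share_bucket(shares)}")
```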

    Happy Tuesday. Hope this post breaks out of the 'unnoticed' category.

    Tuesday
Jan 13, 2015

    What Will Happen if we Move the Company: The Limits of Data

Some years back in a prior career (and life) I was running HR technology for a mid-size organization that at the time had maybe 5,000 employees scattered across the country, with the largest number located on site at the suburban HQ campus (where I was also located). The HQ was typical of thousands of similar corporate office parks - in an upscale area, close to plenty of shops and services, about one mile from the expressway, and near many desirable towns in which most of the employees lived. In short, it was a perfectly fine place to work, close to many perfectly fine places to live.

    But since in modern business things can never stay in place for very long, a new wrinkle was introduced to the organization and its people - the looming likelihood of a corporate relocation from the suburban, grassy office park to a new corporate HQ to be constructed downtown, in the center of the city. The proposed new HQ building would be about 15 miles from the existing HQ, consolidate several locations in the area into one, and come with some amount of state/local tax incentives making the investment seem attractive to company leaders. Additionally, the building would be owned vs. leased, allowing the company to purpose-design the facility according to our specific needs, which, (in theory), would increase overall efficiency and improve productivity. So a win-win all around, right?

Well, as could be expected, once news of the potential corporate HQ relocation made the rounds across the employee population, complaints, criticism, and even open 'time to start looking for a different job' conversations began. Many employees were not at all happy about the possible increase in their commuting time, the need to drive into the 'scary' center city location each day, the lack of easy shopping and other service options nearby, and overall, the change that was being foisted upon them.

So while we in HR knew, (or at least thought we knew), that there would be some HR/talent repercussions if indeed the corporate HQ was relocated, we were kind of at a loss to quantify or predict what those repercussions would be. The best we were able to do, (beyond conversations with some managers about what their teams were saying), was to generate some data about the net change in commuting distance for employees, using a simple, open-source Google Maps-based tool.
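As a rough illustration of what that analysis looked like, here is a sketch, (using straight-line haversine distance as a stand-in for the Google Maps tool; the HQ coordinates and employees are hypothetical), that computes each employee's change in commute:

```python
# Hypothetical sketch of the commute analysis, using straight-line
# (haversine) distance in place of the Google-Maps-based driving distances.
from math import asin, cos, radians, sin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3956 * asin(sqrt(a))  # 3956 miles ~ Earth's radius

OLD_HQ = (43.10, -77.55)  # hypothetical suburban campus
NEW_HQ = (43.16, -77.61)  # hypothetical downtown site

# Hypothetical employee home coordinates
employees = [("Employee A", 43.21, -77.45), ("Employee B", 43.02, -77.68)]
for name, lat, lon in employees:
    delta = haversine_miles(lat, lon, *NEW_HQ) - haversine_miles(lat, lon, *OLD_HQ)
    print(f"{name}: commute change of {delta:+.1f} miles")
```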

    With that data we were able to show that (as expected), some employees would be adversely impacted in terms of commuting distance and some would actually benefit from the HQ move. But that was about as far as we got with our 'data'.

What we didn't really dive into, (and we could have, even with our crude set of technology), was breaking these impacts down by organization, by function, by 'top' performer level, or by 'who is going to be impossible to replace if they leave' criteria.
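Even something as simple as this hypothetical pandas breakdown, (all the data here is invented for illustration), would have gotten us most of the way there:

```python
# Hypothetical pandas breakdown of commute impact by segment - the kind of
# cut we never made. All data here is invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "department": ["IT", "IT", "Finance", "Finance", "Sales"],
    "performance": ["top", "solid", "top", "solid", "top"],
    "commute_delta_miles": [8.2, -3.1, 12.5, 0.4, 15.0],
})

# Which hard-to-replace segments take the biggest commute hit?
summary = df.groupby(["department", "performance"])["commute_delta_miles"].mean()
print(summary.sort_values(ascending=False))
```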

What we couldn't do with this data was estimate just how much attrition was likely to occur if the move was executed. We really needed to have an idea, (beyond casual conversations and rumor), of who, and from what areas, we might find ourselves under real pressure due to possible resignations. 

    And finally, we had no real idea what remedial actions we might consider to try and stave off the voluntary and regrettable separations (the level of which we didn't really know).

    We basically looked at our extremely limited data set and said, 'That's interesting. What do we do with it?'

Why re-tell this old story? Because someone recently asked me what the difference was between data, analytics, and 2015's hot topic, predictive analytics. And while I was trying to come up with a clever answer, (I never really did), I thought of this story of the corporate relocation.

    We had lots of data - the locations of the current campus and the proposed new HQ. We also had the addresses of all the employees. We had all of their 'HR' data - titles, tenure, salary, department, performance rating, etc.

    We kind of took a stab at some analytics - which groups would be impacted the most, what that might mean for certain important areas, etc. But we didn't really produce much insight from the data.

But we had nothing in terms of predictive analytics - we really had no idea what was actually going to happen with attrition and performance if the HQ was moved, and we definitely had no ideas or insights as to what to do about any of that. And really, that was always going to be hard to get at: how could we truly predict individuals' decisions based on a set of data and an external influence that had never happened before in our company, when any 'predictions' we made could not have been vetted at all against experience or history?

So that's my story about data, analytics, and predictive analytics - just one simple example from the field of why this stuff is going to be hard to implement, at least for a little while longer.