Entries in performance management (42)

Wednesday
Jun 13, 2018

A reminder to evaluate the work, not just the person doing the work

Here's a super interesting story from the art world that I spotted in the New York Times, titled The Artwork Was Rejected. Then Banksy Put His Name To It.

The basics of the story, and they seem to be undisputed, are these:

1. The British Royal Academy puts on an annual Summer Exhibition of Art, and anyone is allowed to submit a piece of art for consideration to be included in the exhibition.

2. The anonymous, but incredibly famous, artist Banksy submitted a painting, but under a (different) pseudonym - 'Bryan S. Gaakman' - which is an anagram of 'Banksy anagram'.

3. 'Gaakman's' submission was declined inclusion in the exhibit by the event's judges.

4. One of the event's judges contacted Banksy (how one contacts Banksy was not fully explained) to inquire whether the famous artist had a submission for the exhibit. This judge did not know that 'Gaakman' was actually Banksy.

5. Banksy submitted a very slightly altered version of the 'Gaakman' piece to the exhibit - and it was accepted for the show. Basically, the same art from an 'unknown artist' was declined, but from the famous Banksy it was deemed worthy.

What can we take away from this little social experiment? Three things at least. 

 

1. We always consider 'who' did the work along with the work itself, when assessing art, music, or even the weekly project status report. We judge, at least a little, on what this person has done, or what we think they have done, in the past.

2. Past 'top' or high performers always get a little bit of a break and the benefit of the doubt. It happens in sports, when close calls usually go in favor of star players, and it happens at work, where the 'best' performers get a little bit more room when they turn in average, or even below average, work. They have 'earned' a little more wiggle room than newer, or unproven, folks. This isn't always a bad thing, but it can lead to bad decisions sometimes.

3. What we want, as managers, is good, maybe even great 'work'. But what the organization needs is great 'performers'. Great performers don't always do great work, but over time their contributions and results add up to incredible value for the organization. So in order to ensure that the organization can turn great 'work' into great (and sustainable) long-term performance, every once in a while less than great work, turned in by a great performer, needs to get a pass. Take the long view if you know what I mean.

That's it for me - have a great day!

Wednesday
Apr 25, 2018

The downside of performance transparency

Openness, transparency, shared and socialized goals - and progress towards attainment of those goals - are all generally seen as positive influences on workplaces, organizational culture, and individual performance. We seem to value and appreciate a better understanding of what other folks are working on, how our own projects fit in with the overall organization, and, probably more than anything else, we like the idea that performance management, ratings, promotions, and compensation are, above all else, "fair". And when we have that better sense of what people are working on, how much progress is being made, and who in the organization is succeeding, (and when we believe the metrics that define success are also clear and visible), it seems logical that it will translate to increased engagement, productivity, and overall positive feelings about work and the organization.

But, (and you knew there had to be a but), sometimes, openness, transparency, and increased visibility to employee performance and the ability to compare employee performance can drive undesired and even detrimental employee behaviors. And a combination of performance visibility along with the wrong or even misguided employee goals can lead to some really unfortunate outcomes.

Example: What happened when surgeons in the UK began to be measured primarily on patient mortality and these measurements were made much more visible.

From a 2016 piece in the UK Telegraph:

At least one in three heart surgeons has refused to treat critically ill patients because they are worried it will affect their mortality ratings if things go wrong.

Patients have been able to see league tables showing how well surgeons perform since 2014.

But consultant cardiac surgeon Samer Nashef warned that increased transparency had led to doctors gaming the system to avoid poor scores.

Just under one third of the 115 specialists who responded to Nashef's survey said they had recommended a different treatment path to avoid adding another death to their score. And 84 percent said they were aware of other surgeons doing the same.

So to re-set - UK surgeons were measured on surgical patient mortality outcomes. These outcomes were highly visible in the industry and to the public. And, as humans always seem to learn really quickly, surgeons began to 'game' the system by increasingly avoiding riskier surgeries for the sickest, neediest patients, so as not to negatively impact their own ratings. So the sickest patients, with the most difficult cases, found it harder to get the treatment they almost certainly needed. And the best, most talented surgeons, who should have taken on these complex cases, learned to avoid them, or to pass them off to other, less talented doctors.

So the combination of the wrong, or at least imperfect, performance metric (surgical mortality) with the desire (however well-intentioned) to make doctor performance against this imperfect metric more transparent and visible served to incent the wrong behaviors in doctors, and to reduce the overall quality of care for patients - particularly the ones who were in the most dire circumstances.
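The mechanics of that incentive are easy to see with a few numbers. This is a purely hypothetical sketch, not data from the Telegraph story: the case risks, thresholds, and counts below are all made up for illustration.

```python
import random

random.seed(42)

# Hypothetical case mix: each case has some probability of a bad outcome.
cases = [{"risk": random.uniform(0.01, 0.30)} for _ in range(200)]

def mortality_rate(accepted):
    """Expected deaths divided by cases taken - the single visible metric."""
    deaths = sum(c["risk"] for c in accepted)
    return deaths / len(accepted)

# Surgeon A treats all comers; Surgeon B quietly declines the hard cases.
takes_everyone = cases
avoids_risk = [c for c in cases if c["risk"] < 0.10]

print(f"Treats all comers: {mortality_rate(takes_everyone):.1%} mortality")
print(f"Avoids hard cases: {mortality_rate(avoids_risk):.1%} mortality")
# The 'better' score comes from turning away the patients who needed
# treatment most, not from doing better surgery.
```

The metric improves while the actual quality of care gets worse, which is exactly the gap between 'what we can see' and 'what we want' that the story describes.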

The lessons or takeaways from this story?

Be really careful when making employee performance measurements open and transparent across the organization and beyond.

Be even more careful if you decide to focus on a single performance metric, that the metric is actually one that is meaningful and relevant to your organization's customers (and isn't one that can be gamed).

And finally, before you do either of the first two things, spend some quality time with your organization's best performers to figure out what they focus on, how they measure themselves, and how they make sure they are providing the best service possible.

Chances are, in the UK surgeon case, none of the best surgeons would have said they became great surgeons by avoiding the most difficult cases.

That's it, I am out - have a great day.

Wednesday
Feb 15, 2017

I know he has the title, but is he believable?

I'm sure you've seen reports of the numerous large and sometimes high-profile organizations that are altering or outright scrapping traditional, ratings-centric performance management processes in favor of something more nimble, flexible, and frequently centered around coaching and development. More forward-looking, as opposed to scoring the past, as it were.

While the actual results of these new 'no more ratings' performance programs have so far been mixed at best, it does seem likely that this trend will continue for a little while longer anyway. And one of the by-products of these kinds of programs, ironically enough, is the generation of more 'performance' data, not less - or at least more than in a traditional annual review process. In these new programs, check-ins, kudos, 'real-time' feedback comments, 1-1 meetings, and even micro bonuses or awards will be happening all year long, and will need to be sorted, assessed, and made sense of in order for these programs to deliver on their goals - namely improved business and individual performance.

I was thinking about this when reading about how one firm, Bridgewater Associates, is taking this idea of high-frequency, real-time, and highly data-driven approaches to employee performance and development to an incredibly detailed level.

You should read the entire piece, but here is a snippet from the Business Insider article that sheds a little light on how the firm uses data points on 100+ traits to rate, evaluate, and assess its staff:

Every employee has a company-issued iPad loaded with proprietary apps. One of them, called "Dots," contains a directory of employees and options to weigh in on various elements of each person's work life, categorized in values, abilities, skills, and track record.

There are more than 100 attributes in total, but the collections of attributes are customized to roles in the company, in the sense that an investor's performance would not be measured according to the same traits that would be used to measure a recruiter's performance.

Employees are free to use Dots whenever they'd like, when they want to praise or criticize a colleague for a particular action.

The numerical value of these Dots is considered along with performance reviews, surveys, tests, and ongoing feedback and averaged into public "baseball card" profiles for every employee. The profiles get their name from the list of attributes and corresponding ratings, the same way a baseball card would list something like a player's batting average accompanied by a brief description of their career.

These are then brought into play in meetings where decisions are being made. Using their iPads, colleagues will vote on certain choices, and in the system of believability-weighted decision making, each vote will have a weight depending on the individual's baseball card and the nature of the question.

"A person's believability is constantly relevant," Prince said. "In a meeting, it is relevant to things like how you self-regulate your own engagement in a discussion, how the person running the meeting manages the discussion, and in actual decisions. At all times a person should be assessing their own believability so that they can function well as part of a team."

There's a lot to unpack there, and I am fairly sure that this kind of pervasive, detailed, transparent, and for many, scary, kind of performance/evaluation scheme would not work at most places and for most people. But I think there are (at least) two key features of this system that any organization should think about in terms of their own performance processes.

The first is that the 'Dots' app has the ability to collect, synthesize, and make sense of the many thousands of data points that are generated each year for every employee, so that these interactions, assessments, and bits of feedback are not wasted, or passed off into the ether shortly after they are created. In this way the firm continues to build valuable intelligence about its people and their capabilities over time.

And secondly, this information is taken into account when decisions are being made. So if you have built up credibility over time on a particular subject, your opinion or vote on issues related to that subject carries proportionally more weight than that of someone less experienced or believable on that issue, regardless of position or title. This data-driven approach to 'Who should we believe about this?' helps the firm guard against the 'loudest voice in the room wins' trap that many organizations fall prey to.
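The core idea of believability-weighted voting can be sketched in a few lines. To be clear, Bridgewater's actual Dots and 'baseball card' systems are proprietary and far richer than this; the per-topic scores, names, and data shapes below are invented purely to illustrate the mechanism described above.

```python
# Minimal sketch of believability-weighted voting, under the assumption
# that each voter carries a per-topic believability score.
def weighted_decision(votes, believability, topic):
    """votes: {person: choice}; believability: {person: {topic: weight}}.
    Returns the choice with the highest total believability weight."""
    tally = {}
    for person, choice in votes.items():
        # Unknown voters or topics default to a neutral weight of 1.0.
        weight = believability.get(person, {}).get(topic, 1.0)
        tally[choice] = tally.get(choice, 0.0) + weight
    return max(tally, key=tally.get)

# Hypothetical 'baseball cards': track records differ by subject.
believability = {
    "alice": {"markets": 9.0, "recruiting": 2.0},
    "bob":   {"markets": 3.0, "recruiting": 8.0},
    "carol": {"markets": 2.5, "recruiting": 7.5},
}

votes = {"alice": "yes", "bob": "no", "carol": "no"}

# On a markets question, Alice's single 'yes' (9.0) outweighs the two
# 'no' votes (3.0 + 2.5); on a recruiting question, the same three
# votes go the other way.
print(weighted_decision(votes, believability, "markets"))     # yes
print(weighted_decision(votes, believability, "recruiting"))  # no
```

The interesting design choice is that the same set of raw votes can produce different outcomes depending on the subject at hand - which is exactly the guard against 'loudest voice wins'.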

Really interesting stuff and while maybe being a little too extreme (and disciplined) for most organizations, the Bridgewater approach to performance might give you at least a general idea of where we are heading - a place where every employee action, interaction, and decision is logged, rated, and contributes to their overall profile. And where that profile is taken into account when decisions need to be made. 

Good stuff for a Wednesday. Have a great day!

Thursday
Nov 3, 2016

Feedback

Semi-frequent reminder, as we all continue to push further and further into a world with constant, varied, and often very, very imperfect and uninformed feedback on almost everything we do, (I took an Uber ride last night from the airport to a hotel; I can only wonder what my driver was thinking as he 'rated' my performance as a passenger), that lots of the feedback we encounter is, basically, crap.

Take a look at the image below, courtesy of the grapheine design blog, imagining some potential client feedback if the classic French Tournee du Chat Noir poster, which advertised a Paris cabaret theater, were submitted today: (click HERE for a larger version of the image)

Just because we have better and more accessible tools to give each other, our organizations, and other organizations' products and services more feedback, (and to have that feedback be publicly available), doesn't mean that we, any of us, have somehow gotten better at giving and receiving said feedback.

As the image above shows, even classic, iconic works of art and design can be picked apart by less experienced and talented folks who, by virtue of their position on an org chart or on a project team, feel compelled to pass judgement on the efforts of others.

I am certainly not saying that having access to more forms and volumes of feedback is a bad thing, I am just reminding you that 'more' doesn't equate to 'better', at least not all the time. 

Le Chat Noir probably doesn't need any improvement. Your last work project might not either.

But if there are people in the organization who are put into a position where they see it as their job to give you feedback, then feedback is what you will receive.

Hopefully, it won't be the kind of feedback that compels you to alter your masterpiece either.

Monday
Aug 22, 2016

Wanting to win is a great motivator. So is not wanting to come in last place

Over the weekend I was coerced, er, had the opportunity to participate in a 2-mile time trial with my son's high school cross-country team, and the results were pretty sad and interesting at the same time.

Let's step back a bit to set some context. I heard about the Saturday morning time trial pretty late on Friday evening and was informed that the cross-country coach had encouraged the student runners to invite their parents and other family members to attend and even compete in the time trial, and that, in fact, many, many parents would indeed participate in the race. Armed only with that small bit of information, and since I am a very casual two or three times a week jogger and knew I could cover the two miles without collapsing, I agreed to show up and run on Saturday morning.

Fast forward to the actual morning of the race, and it turns out that no, 'many, many' parents were not intending to participate. It was just me, one other older guy, (I say older, I probably had him by 8 or 9 years), and about 30 high school cross-country athletes lined up to race the two miles.

My focus immediately shifted from 'I hope I can run a respectable time' to 'I can't let myself come in last place in this race', as a fairly decent-sized crowd of non-running parents, (as well as all the high schoolers), had gathered to watch the race (and eat donuts and bagels).

After unsuccessfully feigning a pre-race injury in order to try and back out of the race, I was off and running with the 30-odd kids and the one-odd other old dummy like me tricked into doing this.

Here's how the rest of the race unfolded: first half mile or so I tried to stay connected to the back of the pack of kids, second half mile I lost contact with all but about five of the slowest kids, last mile or so I ended up passing a few kids, (most of whom I later found out were making their very first training run that morning).

And oh yeah, the other 'old man' in the race? He stalked me, about 15-20 yards back for most of the race and then tried to outkick me, (term used very, very loosely), in the last 50 yards or so. Once I realized this, I managed to speed up enough to hold him off at the tape. I ended up placing about 25th out of about 31 or 32. My time, while slow, was about one minute per mile faster than I would normally run.

What's the point of all of this, i.e., why place it on the blog?

I was thinking about how incented I was to raise my performance level not to win or even try to win the race, because there was no chance of that, but to a level where I simply would not be the worst performer. And it worked, to a degree.

The fear of being the worst, and having that be a public thing, drove me to perform better than I would have, had I been squarely in the middle of a typical pack of weekend 5K runners. I knew I had to push myself to beat even just one other person in the race and avoid the indignity of coming in last.

All performance is relative. It is true in running, and in most every other activity we take on that calls for measurement, (and rewards).  And motivation to perform to be the best, while certainly powerful and meaningful, isn't the only kind of motivation that can drive improved relative performance.

That's it from me. Happy Monday. Have a great week.