What Drove Your Last Performance Evaluation?
by Dr. Jack Zenger
A great deal is being written today about companies changing their performance management programs. The companies that have abandoned traditional performance appraisals have gained notoriety. The overwhelming majority, however, are keeping some form of performance evaluation. We suspect there are many reasons for that:
• Employees want to know how they are doing.
• Organizations need some measure and record in case of discipline or termination.
• Organizations seek to reward the highest performing employees with greater compensation, and need justification and documentation for that.
The old form of performance appraisal, however, is vanishing. The practice of reducing an individual’s year-long performance to a single digit or word is falling out of favor. Rather than benefiting the recipient, it often does harm.
Are evaluations ever to be trusted?
An article in Harvard Business Review argued that most performance evaluations are more an expression of the person doing the rating than of the individual being evaluated. Researchers have identified rater bias that creeps into evaluations and that, the article suggests, could account for as much as 62% of the evaluation.
They identified three primary sources of rater errors:
1. The overall leniency of the person doing the rating.
Most of us can identify with having a teacher or professor who was a harsh grader versus one who was easy-going and more generous.
2. The “halo” effect.
The overall perceptions about a person tend to color the rating of each of the specific behaviors and traits that describe that person’s performance. In any evaluation process, the individual scores are easily influenced by that overall perception.
3. Positional influence.
We have all heard the observation that “where you stand depends on where you sit.” A manager’s ratings are different from a peer’s or an assistant’s, in part because of the different relationship they have with the person being rated.
The researchers pointed out, however, that while the absolute ratings from two evaluators may be quite different, the rank order of the specific scores on the individual’s capabilities and behaviors is remarkably similar from rater to rater. The high and low scores are generally the same. The behavior of the person being rated clearly has a strong influence on the scores they receive, even though one rater is measuring in centimeters and another is using inches and feet.
What behavior actually drives a person’s evaluation?
Though the process is never perfectly accurate, my colleague Joe Folkman and I strongly support the researchers’ view that the behavior of the individual being rated is the major driving force behind a performance evaluation. Any bias in that process can be greatly reduced simply by having multiple raters participate in the evaluation. In another Harvard Business Review article we shared that if a manager solicits the opinions of three peers and three subordinates, the combined elements of rater bias fall to 24% of the total score, and the performance of the person being rated accounts for more than two thirds (68%, to be exact) of the final score. Using more than seven raters would obviously push that percentage even higher.
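For readers who want a feel for why adding raters dilutes bias so quickly, here is a minimal sketch in Python. It is not the HBR researchers’ actual variance decomposition; it assumes a simplified model in which each rating is the person’s true performance plus an independent rater-specific offset, and the spreads (true_sd, rater_sd) are illustrative values chosen only to show the mechanism. Averaging ratings from more raters shrinks the rater-driven share of the score, so the performance-driven share grows.

```python
import numpy as np

rng = np.random.default_rng(0)

n_people = 10_000   # simulated employees being rated
true_sd = 1.0       # assumed spread of genuine performance
rater_sd = 1.3      # assumed spread of rater leniency / halo / position effects


def performance_share(n_raters: int) -> float:
    """Fraction of rating variance driven by the ratee's actual performance
    when the scores of n_raters independent raters are averaged."""
    performance = rng.normal(0, true_sd, n_people)
    # Each rater adds an independent idiosyncratic offset; averaging across
    # raters shrinks that rater-driven variance roughly by 1 / n_raters.
    rater_offsets = rng.normal(0, rater_sd, (n_people, n_raters)).mean(axis=1)
    ratings = performance + rater_offsets
    return np.var(performance) / np.var(ratings)


for k in (1, 3, 7):
    print(f"{k} rater(s): ~{performance_share(k):.0%} of the score reflects performance")
```

With these made-up parameters a single rater’s score is mostly noise, while averaging seven raters pushes the performance-driven share well past two thirds; the exact percentages depend on the assumed spreads, not on the study’s data.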
Source: Forbes