Secret Santa assessment
I’m posting this half-baked idea here because (i) I think it’s neat, (ii) I love the cute name and want to share it, and (iii) it’s likely a little too far outside my area of expertise, and would take too much effort, for me to (credibly) get it through peer review.
So let’s beta-test it. Is it interesting or useless? Is it novel or has it already been proposed? Let me know!
Introduction
Misaligned and competing individual performance objectives are a root cause of organizational dysfunction. If you need help fixing the conference room projector 45 minutes before your meeting, for example, it will not be useful for IT to process your support request tomorrow, even though their key performance indicator (KPI) is the number of requests completed within 48 hours. The timeliness of your need, which affects your KPI, is not reflected in the KPIs of other members of your organization.
Here we propose a simple way to unify assessments across an organization when members of the organization are assessed along different dimensions. We leave aside the (significant) challenge of determining these assessment metrics. Indeed, many roles may be difficult to assess clearly and organizations can be misled by ill-informed KPIs. Instead, we focus on a way to ensure that different assessment metrics (KPIs), when applied differently across the organization, do not lead to diverging objectives or competing incentives.
“Secret Santa” assessment
Suppose an organization consists of $N$ members, and each member $i$ is assessed with a scoring function (KPI) $s_i$.
Ideally, different scoring functions should align with the overall goals of the organization so members are incentivized to collaborate. However, competing incentives may form when different members have different scoring functions that are not sufficiently aligned. For such situations, we introduce a simple process to more strongly couple or unify incentives across an organization without having to replace or modify the existing KPI system.
For each member $i$, draw uniformly at random a set $P_i$ of $m$ other members of the organization (member $i$'s "Secret Santa" partners) and assign the combined score

$$S_i = (1 - \lambda)\, s_i + \frac{\lambda}{m} \sum_{j \in P_i} s_j, \qquad (1)$$

where $s_i$ is member $i$'s own KPI score and $0 \le \lambda \le 1$ sets how strongly member $i$'s assessment is coupled to the assessments of other members.
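To make the scoring process concrete, here is a minimal Python sketch. The function name, the dictionary-based interface, and the exact form of the score (a weighted average of a member's own KPI and the mean KPI of $m$ randomly drawn partners) are assumptions for illustration, not a specification.

```python
import random

def secret_santa_scores(scores, m=1, lam=0.5, rng=None):
    """Combine each member's own KPI score with the mean score of
    m randomly drawn peers (their "Secret Santa" partners).

    scores : dict mapping member -> KPI score (assumed on comparable scales)
    m      : number of random partners drawn per member
    lam    : coupling strength in [0, 1]; lam = 0 keeps scores unchanged
    """
    rng = rng or random.Random()
    members = list(scores)
    combined = {}
    for i in members:
        others = [j for j in members if j != i]
        partners = rng.sample(others, m)  # fresh random draw for member i
        peer_mean = sum(scores[j] for j in partners) / m
        combined[i] = (1 - lam) * scores[i] + lam * peer_mean
    return combined
```

For example, `secret_santa_scores({"ana": 0.9, "bo": 0.4, "cy": 0.7}, m=1, lam=0.5)` returns each member's score averaged equally with one random partner's score; setting `lam=0` recovers the existing KPI scores untouched.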
Discussion
Why combine scoring functions in this manner, particularly using random pairing? The new score $S_i$ depends not only on member $i$'s own KPI but also on the KPIs of $i$'s randomly drawn partners, so members can no longer improve their own assessments while ignoring, or working against, the objectives of others.
Random pairing, as opposed to the design of a pairing network specific to the organization, allows this assessment strategy to be implemented without additional time or effort. In other words, Eq. 1 serves as an information-free coupling of member incentives. Random sampling also helps ensure fairness and prevent perverse incentives: all possible pairings are equally likely, so members will not benefit much from "gaming the system" by optimizing their paired KPI scores. We also anticipate that each member assessment, for example, each quarterly assessment, will be computed using a fresh set of randomly paired members $P_i$.
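The fairness claim can be checked numerically. In the hypothetical setup below, averaging a member's combined score over many fresh random pairings converges to $(1-\lambda)\,s_i + \lambda\,\bar{s}$, where $\bar{s}$ is the mean KPI of the other members, so no single peer's score dominates member $i$'s incentives. (The score form and all names are illustrative assumptions.)

```python
import random

def expected_combined(scores, i, lam=0.5, trials=100_000, seed=0):
    """Monte Carlo estimate of member i's combined score, averaged
    over many fresh random pairings (one partner per pairing, m = 1)."""
    rng = random.Random(seed)
    others = [s for j, s in scores.items() if j != i]
    total = 0.0
    for _ in range(trials):
        partner = rng.choice(others)  # fresh random pairing each round
        total += (1 - lam) * scores[i] + lam * partner
    return total / trials

scores = {"ana": 0.9, "bo": 0.4, "cy": 0.7}
est = expected_combined(scores, "ana")
# Analytic expectation: (1 - lam) * s_i + lam * mean of the other scores
analytic = 0.5 * 0.9 + 0.5 * (0.4 + 0.7) / 2
```

Because each assessment period draws fresh partners, long-run averages depend only on the peer mean, not on any particular pairing.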
Equation 1 has two parameters, $\lambda$ and $m$. The coupling strength $\lambda$ interpolates between the existing, purely individual assessment ($\lambda = 0$) and a purely peer-based assessment ($\lambda = 1$), while $m$ controls how many other members each member is coupled to.
Of course, there are likely serious flaws with this idea that we have not yet considered. Indeed, we are not currently certain whether this is even a novel idea.