Pink Elephant IT Management Metrics Benchmark Service Blog – Incident Management
Earlier this year, we launched the Pink Elephant IT Management Metrics Benchmark Service. We now have Preliminary Incident Management (IM) Benchmarks based on the initial responses to the Incident Management Metrics Survey. The more participants in all our metrics benchmark surveys, the better!
We welcome your feedback. Please comment on this blog post to let us know what you think.
One surprising item is the Basis for Incident Resolution Interval Expectation: almost a quarter of respondents have none documented. The rest rely on Standards and/or SLAs.
The metrics and the organizational attributes in the IM survey responses cover a wide spectrum. Every response option has been selected by some participants, with no strong bias toward any one response to any question. Medians and Means are approximate, as they are based on range mid-points and, where required, on estimated minimums and maximums. Because the survey uses non-linear ranges to ease data gathering and response by survey participants, there is a fairly large difference between the Median (the center point of all responses ordered by value) and the Mean (the normal average: the total of all responses divided by the number of responses) drawn from the survey responses, except where the response options are narrow.
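To illustrate the approximation described above, here is a minimal sketch of how a Median and Mean can be estimated from binned survey responses using range mid-points. The ranges and response counts below are hypothetical examples, not actual survey data:

```python
# Sketch: estimating Median and Mean from binned survey responses.
# Each response is approximated by the mid-point of its range.
# Bins and counts are hypothetical, not actual benchmark data.

bins = [
    ((0, 100), 5),       # (response range, number of respondents)
    ((100, 500), 8),
    ((500, 2000), 6),
    ((2000, 10000), 3),  # non-linear ranges widen at the high end
]

# Expand each bin to its mid-point, once per respondent.
values = []
for (lo, hi), count in bins:
    midpoint = (lo + hi) / 2
    values.extend([midpoint] * count)

values.sort()
n = len(values)

# Mean: total of all responses divided by the number of responses.
mean = sum(values) / n

# Median: center point of all responses ordered by value.
mid = n // 2
median = values[mid] if n % 2 else (values[mid - 1] + values[mid]) / 2

print(f"n={n}, median={median}, mean={mean:.1f}")
# With non-linear ranges, a few large mid-points pull the Mean
# well above the Median, as noted in the post.
```

Running this hypothetical example yields a Median of 300 against a Mean of roughly 1280, showing how the wide upper ranges separate the two measures.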
One preliminary question about the definition of FCR (First Contact Resolution).
There are several definitions out there. One popular definition, which I find incredibly difficult to measure, is that the case is “resolved or dispatched/escalated correctly at first contact”; basically, it excludes any “warm transfers or required callbacks.”
Do you have a benchmark for the average resolution effort per incident?