This article helps you understand how to interpret and use the different Reviews dashboard cards in Zendesk QA to evaluate agent performance and identify areas for improvement.
This article contains the following topics:
- Understanding how the IQS is calculated
- Understanding how category scores are calculated
- Understanding how individual review scores are calculated
- Understanding how N/A affects QA score calculations
Understanding how the IQS is calculated
Your internal quality score (IQS) is based on your conversation reviews. It represents the average of all review scores received over a specified period, expressed as a percentage.
The IQS is calculated using the following formula:
IQS = (review_score1 + review_score2 + ...) / (number of reviews)

where each review score is already expressed as a percentage.
For example, considering the following review scores scenario:
| Review | Review score |
|---|---|
| 1 | 100.00% |
| 2 | 9.91% |
| 3 | 63.96% |
| 4 | 90.99% |
| 5 | 0% |
| **IQS** | **52.97%** |
The IQS is calculated as follows:
(100% + 9.91% + 63.96% + 90.99% + 0%) / 5 = 52.97%
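The calculation above can be sketched in a few lines of Python (a minimal illustration, not Zendesk's implementation; scores are passed as percentages):

```python
def iqs(review_scores):
    """Internal quality score: the average of all review scores
    (each given as a percentage) over the period."""
    return sum(review_scores) / len(review_scores)

# Review scores from the example table:
scores = [100.00, 9.91, 63.96, 90.99, 0.0]
print(round(iqs(scores), 2))  # 52.97
```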
Understanding how category scores are calculated
When you set up your QA scorecard and define your categories, you also decide the rating scale for each category. This scale is used to determine the category score for an interaction.
It’s calculated using the following formula:
Category score = (selected_score - scale_min) / (scale_max - scale_min) * 100
And uses the following scores:
| Score | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Binary scale | 0% | 100% | | | |
| 3-point scale | 0% | 50% | 100% | | |
| 4-point scale | 0% | 33.3% | 66.6% | 100% | |
| 5-point scale | 0% | 25% | 50% | 75% | 100% |
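The normalization formula can be expressed as a short Python sketch (an illustration under the formula's assumptions, not Zendesk's code):

```python
def category_score(selected, scale_min, scale_max):
    """Normalize a rating onto a 0-100% scale using
    (selected - min) / (max - min) * 100."""
    return (selected - scale_min) / (scale_max - scale_min) * 100

# A rating of 4 on a 5-point scale (1..5) normalizes to 75%:
print(category_score(4, 1, 5))  # 75.0
```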
Understanding how individual review scores are calculated
Each category on your scorecard has a weight value, represented as an integer from 0 to 100. To calculate the review score for an interaction, multiply each category's score by its weight, then divide the total by the sum of the weights.
If any critical category is rated below 50%, the review score is automatically set to 0%.
It’s calculated using the following formula:
Review score = (category1_score × category1_weight + category2_score × category2_weight + ...) / (category1_weight + category2_weight + ...)

If any critical category's score is below 50%, the review score is set to 0% instead.
For example, consider the following scenario with five grouped categories with different weights, where the agent received the following ratings. Critical categories are marked with an asterisk (*).
| Review | Request* (1) | Clarification (3) | Explanation (3) | Writing (2) | Internal Data (1) | Review score |
|---|---|---|---|---|---|---|
| 1 | 100% | 100% | 100% | 100% | 100% | 100% |
| 2 | 100% | 0% | 0% | 100% | 0% | 30% |
| 3 | 100% | 0% | 100% | 100% | 100% | 70% |
| 4 | 100% | 100% | 100% | 0% | 100% | 80% |
| 5 | 0% | 100% | 100% | 100% | 100% | 0% |
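The weighted average and the critical-category rule can be combined in a small Python sketch (an illustration of the formula, not Zendesk's implementation):

```python
def review_score(ratings):
    """ratings: list of (score_pct, weight, is_critical) tuples.
    Returns the weighted average of category scores; any critical
    category rated below 50% zeroes the whole review."""
    if any(critical and score < 50 for score, _, critical in ratings):
        return 0.0
    total = sum(score * weight for score, weight, _ in ratings)
    weights = sum(weight for _, weight, _ in ratings)
    return total / weights

# Review 2 from the table: Request* 100%, Clarification 0%,
# Explanation 0%, Writing 100%, Internal Data 0%
ratings = [(100, 1, True), (0, 3, False), (0, 3, False),
           (100, 2, False), (0, 1, False)]
print(review_score(ratings))  # 30.0
```

Review 5 from the table returns 0.0 because the critical Request category scored 0%, which is below the 50% threshold.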
Understanding how N/A affects QA score calculations
When evaluating support interactions, one or more categories may be marked as N/A (Not Applicable). N/A selections affect how the final QA score is calculated.
When a category is marked as N/A:
- It’s excluded from the final score calculation.
- The final score becomes a weighted average of the remaining rated categories.
- The weights of the rated categories are not adjusted; instead, the score is based only on the weights of the non-N/A categories.
For example, consider a scenario with the following four categories with different weights, where the tone category received an N/A rating:
| Category | Weight | Max rating | Value | Normalized score | N/A? |
|---|---|---|---|---|---|
| Clarity | 30 | 4 | 3 | 0.75 | No |
| Tone | 30 | 4 | N/A | - | Yes |
| Accuracy | 30 | 4 | 2 | 0.5 | No |
| Structure | 10 | 4 | 4 | 1.0 | No |
In this scenario, only three categories (Clarity, Accuracy, and Structure) are used in the calculation. Their contributions are as follows:

- Clarity: 0.75 × 30 × 100 = 2250
- Accuracy: 0.5 × 30 × 100 = 1500
- Structure: 1.0 × 10 × 100 = 1000

Total score = 4750 / 70 = 67.8571%
If all categories are marked as N/A, no final score is calculated. The review will display as “No Score” or remain unscored.
If only one category is rated and all others are marked as N/A, that single category fully determines the final score. Its contribution is calculated normally, and its weight becomes the total weight used in the score calculation.
The same logic applies within groups. If a group contains multiple categories and some are marked as N/A, only the valid categories contribute to the group score. If all categories within a group are marked as N/A, the group score is excluded from the final calculation.
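The N/A handling described above can be sketched in Python (a minimal illustration under the article's assumptions, not Zendesk's code; N/A is represented as `None`):

```python
def score_with_na(categories):
    """categories: list of (normalized_score_or_None, weight) tuples.
    N/A (None) entries are excluded; the final score is the weighted
    average of the remaining rated categories only."""
    rated = [(s, w) for s, w in categories if s is not None]
    if not rated:
        return None  # all categories N/A -> no score is calculated
    total = sum(s * w * 100 for s, w in rated)
    weights = sum(w for _, w in rated)
    return total / weights

# The four-category example above, with Tone marked N/A:
cats = [(0.75, 30), (None, 30), (0.5, 30), (1.0, 10)]
print(round(score_with_na(cats), 4))  # 67.8571
```

Note that the divisor is 70 (the summed weights of the rated categories), not 100, which is why excluding a category changes the final score rather than simply dropping its points.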