Defect Metrics as an indicator of the test team’s effectiveness

Identifying defects is one of a test team's primary goals, and defect data goes a long way in showing how effectively the team has tested a given product. Defects can be excellent indicators of both the quality of the test effort and the quality of the product. However, the right metrics need to be used in each case, at the right stage in the product life cycle, to keep tabs on overall performance and to look for areas of improvement. In this blog post, I will discuss important metrics that the test management team can use to measure the quality of the test effort (which is indicative of the test team's effectiveness). In a subsequent blog, I will discuss metrics that can be used to measure the quality of the product.

  1. Number of defects missed in a release: This is a very important indicator of the team's performance. That said, it is a reactive indicator: the data can only be used for a postmortem analysis of that release, with the learnings applied in the next one. However, because it is based on bugs found post release by actual users of the product, it is a very objective data point on the team's performance and will greatly help improve test coverage and the team's effectiveness in the next release (the first sketch after this list shows the underlying computation, often called the defect escape rate). A related metric that at least gives the test management team some time to act before a product release is "bugs found in a bug bash around product release that should have been found sooner". This metric helps decide whether or not the test team is ready to sign off on a release, giving some opportunity to be proactive before the product reaches the hands of end users.
  1. Number & Type of defects reported by the test team vs. reported by other disciplines: Although, a test team is the one primarily chartered with testing the product and reporting defects, several other teams associated with the product such as development, marketing, program & project management, cross groups associated with the product etc. will also have an opportunity to use the product and report issues over the course of the product life cycle (PDLC). These numbers when tracked at a periodic frequency over the PDLC, is a very useful proactive metric for the test management to identify coverage or effectiveness or areas of focus loopholes, in the test effort and fix them right away.
  3. % of valid and invalid bugs: Do not count only fixed bugs as valid. Including bugs resolved as "external" or "postponed/won't fix" is equally important, since these are still valid bugs. Count bugs resolved as "by design", "not reproducible", or "duplicate" as invalid, since they often show that the test team has not understood the product well enough or has not done its homework before filing the bug, all of which wastes the time and effort of everyone on the team who looks at these bugs. Analyzing this metric periodically, say monthly, is an excellent indicator of how the test team is performing, and a more granular analysis can even be done per tester if you want to work with individuals to improve their performance. This is a good proactive metric, if used regularly, for improving the test team's effort (the third sketch after this list shows one way to compute the split).
  4. Average time to defect detection and closure: Time to detection shows how quickly the test team finds a bug after it is introduced; the sooner a defect is found, the cheaper it is to fix, lowering the overall cost of the product. Similarly, tracking the time the team takes to verify (regress) resolved bugs shows how quickly it responds to fixes; the sooner resolved bugs are verified, the faster any regressions are caught. Both are proactive metrics a test manager can use to gauge the team's effectiveness and close any gaps (the final sketch after this list illustrates both computations).
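For the first metric, here is a minimal Python sketch of the defect escape rate: the fraction of defects found only after release. The record layout and phase labels are assumptions for illustration, not fields from any particular bug tracker.

```python
# Minimal sketch, assuming each defect record carries a "found_in_phase"
# field (a hypothetical name) exported from your bug tracker.

def escape_rate(defects):
    """Fraction of defects that were found only after release."""
    pre = sum(1 for d in defects if d["found_in_phase"] == "pre-release")
    post = sum(1 for d in defects if d["found_in_phase"] == "post-release")
    total = pre + post
    return post / total if total else 0.0

defects = [
    {"id": 101, "found_in_phase": "pre-release"},
    {"id": 102, "found_in_phase": "pre-release"},
    {"id": 103, "found_in_phase": "post-release"},
]
print(f"Escape rate: {escape_rate(defects):.0%}")  # Escape rate: 33%
```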
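For the second metric, a sketch of tallying defects by reporting discipline per month. The field names and team labels are hypothetical; in practice they would come from the reporter metadata in your tracker.

```python
from collections import Counter

# Illustrative defect records; "reported_by" and "month" are assumed fields.
defects = [
    {"reported_by": "test", "month": "2012-01"},
    {"reported_by": "test", "month": "2012-01"},
    {"reported_by": "development", "month": "2012-01"},
    {"reported_by": "test", "month": "2012-02"},
    {"reported_by": "program management", "month": "2012-02"},
]

# Tally defects per (month, discipline) to watch the trend over the PDLC.
counts = Counter((d["month"], d["reported_by"]) for d in defects)
for (month, team), n in sorted(counts.items()):
    print(f"{month}  {team:<20} {n}")
```

A sustained rise in defects reported by non-test disciplines is the signal to dig into coverage gaps right away, rather than waiting for a postmortem.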
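For the third metric, a sketch that maps resolutions into the valid and invalid buckets described above and computes the valid percentage. Resolution strings vary across trackers, so these labels are assumptions.

```python
# Buckets per the groupings discussed above; labels are assumptions.
VALID = {"fixed", "external", "postponed", "won't fix"}
INVALID = {"by design", "not reproducible", "duplicate"}

def valid_percentage(bugs):
    """Percent of classified bugs that fall in the valid bucket."""
    valid = sum(1 for b in bugs if b["resolution"] in VALID)
    classified = sum(1 for b in bugs if b["resolution"] in VALID | INVALID)
    return 100.0 * valid / classified if classified else 0.0

bugs = [
    {"id": 1, "resolution": "fixed"},
    {"id": 2, "resolution": "duplicate"},
    {"id": 3, "resolution": "won't fix"},
    {"id": 4, "resolution": "by design"},
]
print(f"Valid: {valid_percentage(bugs):.0f}%")  # Valid: 50%
```

Run monthly, the same function filtered to one tester's bugs gives the per-tester view mentioned above.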
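Finally, for the fourth metric, a sketch of average time-to-detection and time-to-closure. Note the assumption that the date a defect was introduced is recorded (for example, traced back to the causing check-in); many teams only approximate this.

```python
from datetime import date

# Assumed per-defect dates: when it was introduced, reported by test,
# resolved by development, and verified (regressed) by test.
defects = [
    {"introduced": date(2012, 1, 2), "reported": date(2012, 1, 5),
     "resolved": date(2012, 1, 9), "verified": date(2012, 1, 10)},
    {"introduced": date(2012, 1, 3), "reported": date(2012, 1, 12),
     "resolved": date(2012, 1, 15), "verified": date(2012, 1, 19)},
]

def avg_days(pairs):
    """Average gap in days over an iterable of (start, end) dates."""
    deltas = [(end - start).days for start, end in pairs]
    return sum(deltas) / len(deltas)

time_to_find = avg_days((d["introduced"], d["reported"]) for d in defects)
time_to_close = avg_days((d["resolved"], d["verified"]) for d in defects)
print(f"Avg time to detection: {time_to_find:.1f} days")
print(f"Avg time to verify a fix: {time_to_close:.1f} days")
```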

In this blog, I have discussed both proactive and reactive metrics that can be used at various stages in the PDLC. When tracked consistently, used to drive the team's improvement, and presented to senior management as evidence of how the test team is performing, they go a long way toward establishing a strong base for the test team within the overall product team.

About the Author

QA InfoTech
Established in 2003 with fewer than five testing experts, QA InfoTech has grown by leaps and bounds to three QA Centers of Excellence globally: two located in Noida, the hub of IT activity in India, and the third at our affiliate, QA InfoTech Inc., in Michigan, USA. In 2010 and 2011, QA InfoTech was ranked among the top 100 places to work for in India.