Defining Test Automation Metrics
I am convinced that testing approaches and methods need constant improvement. Over the years we adopted new testing methods and ideas as they appeared, but recently we realized we could no longer work the way we did before. Our methods are less effective than newer ones and do not work well together, so we have to rebuild our strategy. Because we did not notice this in time, we now face major changes to our testing flow and test strategy.
So what can be done to prevent such major changes, which could otherwise wreck our estimates?
We talked with our outsourced test analysts about improving our approaches and introducing new techniques, and began thinking about the goals we want to reach. How do we know this evolution will have a positive impact? How do we determine that a new idea will really improve our processes?
We asked test analysts, business analysts, and product owners how to measure the success of testing. The most important question here is metrics: in order to know "how much", we first need to know what "much" means.
Below is a summary of the ideas collected during those conversations, which can be considered parameters for building metrics. I have also tried to outline my own vision of how to evaluate the success of testing.
1. Metrics associated with the number of defects.
a. The number of defects found in the testing process.
(The meaning of the metric: if we found many defects during the testing phase, those defects will not reach the user. Consequently, the quality of our product has increased.)
b. The number of defects found in the release.
(The meaning of the metric: if no bugs are found in the release after some time, we can consider the result acceptable. The main goal here is to cover all possible use cases. There will still be bugs, but fewer is better.)
c. The number of critical defects in the release.
(The meaning of the metric: if critical bugs were missed, the test methods are inadequate and the whole process should be reviewed.)
Problems and open questions here, which will differ from project to project:
- How will bug severity be set? How important is a given bug? There are bugs in any product, but do they matter to the business? Severity should be assigned based on many factors.
- Whether a bug is critical matters, but note that a large number of minor bugs is also a problem: you may lose customers because of them. Will your testing be considered successful in that case?
- If you cannot find any more bugs, it does not mean the code is good. And vice versa: it is possible there were very few bugs in the code to begin with. In that case, whose achievement is the resulting quality, the testers' or the programmers'?
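As a minimal sketch of how the three defect-count metrics above could be computed: the `Defect` record and its `found_in`/`severity` fields are my own illustrative assumptions, not the schema of any particular bug tracker.

```python
from dataclasses import dataclass

@dataclass
class Defect:
    found_in: str   # phase where the defect was caught: "testing" or "release"
    severity: str   # e.g. "critical", "major", "minor"

def defect_metrics(defects):
    """Compute the three defect-count metrics: defects caught in testing,
    defects that escaped into the release, and critical escapes."""
    found_in_testing = sum(1 for d in defects if d.found_in == "testing")
    found_in_release = sum(1 for d in defects if d.found_in == "release")
    critical_in_release = sum(
        1 for d in defects
        if d.found_in == "release" and d.severity == "critical"
    )
    return {
        "found_in_testing": found_in_testing,
        "found_in_release": found_in_release,
        "critical_in_release": critical_in_release,
    }
```

Any non-zero `critical_in_release` is the strongest signal in this set: per point 1c, it should trigger a review of the whole process, not just of the individual bug.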
2. Metrics that measure time.
a. Time spent on testing.
(The meaning of the metric: if the release was done quickly and the estimates held, that is excellent. Therefore, the less time spent on testing, the more successful it was (provided, of course, that no critical bugs slipped through).)
b. The time that has passed since the last bug was reported.
i. The meaning of the metric: if no bugs have been found for a long time, maybe it is time to stop the current round of testing? When to stop testing is covered in another article here. What counts as "a long time" depends on the current situation on the project.
c. Problem areas:
- Speed does not mean high quality.
- If you cannot find bugs, it does not mean there are none. Perhaps all the obvious bugs were found, but some special cases (a bad internet connection, a different IP address, unusual configurations, etc.) were not considered, and this may cause problems in production.
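The "time since last bug" signal from point 2b can be sketched roughly like this, assuming bug reports are available as timestamps. The 14-day threshold is an arbitrary illustrative default; as noted above, the right value depends on the project.

```python
from datetime import datetime

def days_since_last_bug(bug_report_dates, now):
    """Days elapsed since the most recent bug report, or None if no bugs yet."""
    if not bug_report_dates:
        return None
    return (now - max(bug_report_dates)).days

def quiet_period_reached(bug_report_dates, now, threshold_days=14):
    """True if no bug has been reported for `threshold_days`. This is a
    possible signal (not proof!) that the current testing round can stop."""
    days = days_since_last_bug(bug_report_dates, now)
    return days is not None and days >= threshold_days
```

Note the deliberate asymmetry: an empty bug list yields `False`, because "we have not reported anything" is not the same as "we stopped finding bugs", exactly the trap described in the problem areas above.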
3. Metrics that measure coverage.
a. The percentage of all possible use cases covered by tests.
(The meaning of the metric: this works if all possible use cases have been described by the product manager. In that case 100% coverage is achievable.)
b. The number of acceptance criteria that the product satisfies.
(The meaning of the metric: sometimes the requirements document is called an acceptance plan. It contains not every possible use case but the mandatory requirements that must be met. If the product works as it should, then we created exactly what we set out to create.)
c. Questions for reflection:
(This approach can leave you blind: you verify the requirements without ever trying to "break" the program or to test how it behaves in an unusual use case.)
(The product is constantly evolving and contains many complex relationships. Do not focus only on changes and additions: new functionality can break something that was not supposed to change. When testing new features, do not forget about regression.)
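A rough sketch of the two coverage metrics above, assuming use cases and acceptance criteria are tracked by simple identifiers; the function names and data shapes are my own illustration, not an existing tool's API.

```python
def use_case_coverage(all_use_cases, tested_use_cases):
    """Percentage of documented use cases that have at least one test.
    Only meaningful if the product manager documented ALL use cases (3a)."""
    if not all_use_cases:
        return 100.0
    covered = set(all_use_cases) & set(tested_use_cases)
    return 100.0 * len(covered) / len(all_use_cases)

def acceptance_criteria_met(criteria_results):
    """Count satisfied acceptance criteria (3b) from a
    {criterion_name: passed} mapping, e.g. built from an acceptance plan."""
    return sum(1 for passed in criteria_results.values() if passed)
```

Both numbers measure conformance to what was written down; neither catches the "blind spot" problem above, since an undocumented scenario simply does not appear in the denominator.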
4. Scope of testing
a. Testing estimates.
(The meaning of the metric: this usually depends on the project. Time is usually the most expensive resource, but sometimes a project is so important, or so inactive, that time does not matter much. In any case, we should account for the time allotted for testing in our metric system: more time = fewer bugs = higher quality.)
b. Excluding areas that will not be tested.
(The meaning of the metric: sometimes, due to lack of time, a decision is made to skip some tests. This creates a big opportunity for bugs to appear. Those bugs will affect product quality, but it was a measured decision.)
c. Possible problems:
- A large amount of testing can be a waste of time, so understand that the return on investment (ROI) may be lower than you expect. Deciding exactly what must be tested and what can be skipped should be a balanced decision, and it is always associated with certain risks. Last time you did not test this area and the product did not suffer; will you check it next time? That decision may affect the whole project.
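The risk-based scoping decision above can be sketched as a simple greedy cut: test the riskiest areas first until the time budget runs out, and record everything else as a conscious exclusion. The tuple format and the scores in the example are illustrative assumptions only.

```python
def prioritize_areas(areas, time_budget_hours):
    """Greedy risk-based scope cut. `areas` is a list of
    (name, risk_score, estimated_hours) tuples. Returns the areas we will
    test and the areas we consciously exclude, highest risk first."""
    by_risk = sorted(areas, key=lambda a: a[1], reverse=True)
    tested, excluded = [], []
    remaining = time_budget_hours
    for name, risk, hours in by_risk:
        if hours <= remaining:
            tested.append(name)
            remaining -= hours
        else:
            excluded.append(name)  # a measured decision, but a recorded risk
    return tested, excluded
```

The point of returning `excluded` explicitly is the point of metric 4b: an area skipped by decision is a known, reviewable risk, while an area skipped by accident is not.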
My view on how to measure the success of testing
I believe all of these points of view have a right to exist, but it is necessary to concentrate on testing's contribution to a successful product launch. If our product is successful, then our testing was successful. But to deliver such a verdict, you need to clearly understand the factors that affect the product's success, and even knowing them, it is hard to give an unambiguous answer to whether the testing was successful or not.
A wise man once said, "You can't control what you can't measure." I will add: you cannot measure things that are out of control. So keep the processes on your project under control, and be happy.