Main types of defects in the software testing process
Every QA engineer's nightmare is a bug that comes back from production. You work hard, you try to check every possible flow, you test for 8+ hours a day, and then, within a week of the release, a user reports a critical problem. How is this possible? Why does it happen, and how can it be fixed? Below I split the possible defects into groups by cause.
#Group 1: Some factor affected the system, and your tests did not cover it.
Our product has a form where the user can upload files to the server. We tested it with different file types, different file names, and different file sizes, and everything worked. After the release it turned out that 80% of users could not upload their files! Files uploaded successfully only from the local disk, not from a USB flash drive, and that is exactly what those users were trying to use.
Why it happened:
It is not enough to test all the declared functionality; it is just as important to identify every external factor that can affect the result.
How to fix:
- Before you start testing, ask yourself: what factors can influence the result? To keep the analysis structured, list the factors for each action your application performs.
- Communicate with developers! Ask them what can influence the application's behavior.
- Show the results of your research (tables, mind maps, checklists) to developers, analysts, and architects.
- Learn as much as possible about your end users, and try to cover at least every use case that might actually occur. This information can bring new ideas to your testing.
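The factor list from the first step above can be kept as data rather than prose, so that every factor value turns into a concrete check. A minimal sketch (the action and factor names are invented for illustration):

```python
# Map each user action to the external factors that can affect it.
# Keeping this as data makes it easy to review with developers and
# to notice a missing factor such as "file source = USB drive".
FACTORS = {
    "upload_file": {
        "source": ["local disk", "USB drive", "network share"],
        "size": ["empty", "1 KB", "just under limit", "over limit"],
        "name": ["ascii", "unicode", "very long"],
    },
}

def checklist(factors):
    """Expand the factor map into a flat list of checks to execute."""
    return [f"{action}: {factor} = {value}"
            for action, fmap in factors.items()
            for factor, values in fmap.items()
            for value in values]

for check in checklist(FACTORS):
    print(check)
```

The expansion guarantees that a factor written down once cannot be silently skipped during test design.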
#Group 2: Some of the possible combinations were ignored
For example, we have an online store. After a release, many users complain that they cannot add items to the shopping cart. From the customers' bug reports we learn that only a small subset of items is affected, but we cannot reproduce the problem ourselves. Only after a long investigation does it turn out that the failure occurs only when an item of this type is added together with an item from another category. The issue exists only in a specific combination of conditions.
Why it happened:
Usually it is not enough to check each influencing factor and product option on its own; you also need to test combinations of them.
How to fix:
- Find out which previously delivered features could affect the new one. This often requires knowledge of the product's architecture, and developers can help with that.
- Do regression testing yourself. Every time you suspect that a bug fix or a new feature could affect something else, add that idea to your notes, and do not forget to check those notes when you have spare time or during the regression period.
- Use different methods of combining the checks.
- Among combinatorial methods, pay special attention to pairwise testing, especially when "what is connected with what" is unknown.
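The pairwise idea can be sketched in a few lines: instead of running every full combination of parameters, greedily pick test cases until every pair of values from any two parameters appears in at least one test. A minimal sketch, with invented online-store parameters:

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy pairwise reduction: keep picking the full combination that
    covers the most not-yet-covered value pairs, until every pair of
    values from any two parameters appears in at least one test."""
    names = list(params)

    def pairs_of(case):  # case is a tuple of (name, value) items
        return set(combinations(case, 2))

    uncovered = set()
    for a, b in combinations(names, 2):
        uncovered |= {((a, va), (b, vb))
                      for va in params[a] for vb in params[b]}
    all_cases = [tuple(zip(names, values))
                 for values in product(*params.values())]
    suite = []
    while uncovered:
        best = max(all_cases, key=lambda c: len(pairs_of(c) & uncovered))
        suite.append(dict(best))
        uncovered -= pairs_of(best)
    return suite

# Hypothetical shop parameters (names invented for illustration):
params = {
    "item_category": ["books", "toys", "food"],
    "payment": ["card", "paypal"],
    "cart_state": ["empty", "has_items"],
}
suite = pairwise_suite(params)
print(len(suite), "tests instead of", 3 * 2 * 2, "full combinations")
```

On larger parameter spaces the saving grows dramatically, which is exactly why pairwise is the usual compromise when the interactions between features are unknown. Production tools use more sophisticated algorithms, but the greedy version conveys the principle.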
#Group 3: Lack of time to test
Scenario 1: Build delivered - tested - critical defects found - new build - tested - critical defects - new build - no time to test, no critical defects found - released - oops ...
How to fix:
- When testing time is short, it is very difficult to pick out the most important checks after testing has already begun. So priorities should be assigned while the test cases are being prepared.
- Agree on the release-candidate (RC) testing period in advance. A day, two days, a week: the right length depends on the size of the product, but the term should be clearly known and understood by everyone.
- Automation. The most important release checks should be run by machines: machines are faster than humans! Make sure the general functionality is covered by automated tests, so that manual effort goes into the specific checks.
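Assigning priorities up front means that cutting the suite under time pressure becomes a mechanical selection, not a panicked judgment call. A minimal sketch (the test names and priority values are invented):

```python
# Each check is written down with a priority at test-design time:
# P1 = must pass before any release, P2 = important, P3 = nice to have.
TEST_CASES = [
    ("user can log in",           1),
    ("user can check out a cart", 1),
    ("password reset email sent", 2),
    ("profile avatar upload",     3),
    ("UI theme switching",        3),
]

def release_suite(test_cases, max_priority):
    """Keep only the checks whose priority qualifies for this run."""
    return [name for name, priority in test_cases
            if priority <= max_priority]

# One day left before the release: run only the P1 set.
print(release_suite(TEST_CASES, 1))
# -> ['user can log in', 'user can check out a cart']
```

In a real project the same idea is usually expressed with test-framework markers (e.g. pytest's `-m` selection) rather than a hand-rolled list, but the principle is identical: the priority is recorded before the time pressure arrives.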
Scenario 2: Each new sprint adds new functionality, we test only that, and we have no resources for regression testing. It is then almost certain that after the release errors will appear in related areas as new functionality is added.
How to fix:
- Automate regression testing.
- Optimize the test sets. Use test-analysis approaches that minimize the number of checks (maximum checks in one test, pairwise, etc.) so that regression testing can be done at a small cost.
- Manage risks. Document every potential problem caused by the lack of resources, and make sure your managers are notified about them.
#Group 4: Did not notice, uh ...
Sometimes strange things happen: it seems that all the checks were in the test cases, yet we still did not notice the bug during testing. There can be several reasons for this as well.
Scenario 1: You simply did not notice!! Yes, it may sound strange, but it happens very often. Simple inattention and distraction mean that, despite all your responsibility and your desire not to miss anything, the bug still stays hidden from you.
How to fix:
- Yes, it is bad that you missed bugs, but we are not robots, and this can happen to anyone.
- Give your brain some rest.
- Think about work while you are at work.
- Try to relax and not be a QA engineer at home.
- Take breaks during the workday and get some fresh air.
- Try to sleep at least 7 hours per night (not at work, of course).
- But if this is a permanent problem, it might be better to change your profession.
Scenario 2: You did not know it was a bug. This most often happens in insufficiently documented areas of the functionality. And which areas usually have the fewest requirements? Two categories of features tend to be affected: the most complex ones (many things are unclear, which is exactly why they were never documented) and the most obvious ones (no documentation, because everyone assumes everything is clear).
How to fix:
- Clarify every unclear point. You can try to work it out yourself, but it is better to ask someone who already has experience with this functionality. Don't be shy; you are all in the same boat.
- Try to imagine an unprepared user. How would they expect this functionality to work?
- If you see definitions like "same as today", "relevant result", or "correct result", ask questions. Pin down every variable and every formula used in the calculations that produce the "relevant result".
- Write down every question you asked and every formula you were given. Believe me, you will be very thankful for your notes.
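One effective way to record such a formula is to write it down as an executable check rather than prose. A hypothetical illustration (the product and the formula are invented): suppose the spec only said the cart shows the "relevant total", and after asking we learned the exact rule.

```python
def cart_total(prices, discount_percent):
    """The clarified rule, captured from the analyst's answer:
    total = sum of item prices, reduced by the discount percentage,
    rounded to two decimal places (cents)."""
    subtotal = sum(prices)
    return round(subtotal * (1 - discount_percent / 100), 2)

# The "relevant result" for a 10% discount on 10.00 + 5.50
# is now pinned down and cannot drift between readers:
assert cart_total([10.00, 5.50], 10) == 13.95
print(cart_total([10.00, 5.50], 10))  # -> 13.95
```

A note phrased this way doubles as a ready-made regression test the next time anyone touches the calculation.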
It seems that the main reasons for missing critical bugs have been listed. Is this a complete list? Of course not! Sometimes bugs are missed because of ignorance of elementary testing techniques, sometimes because of planning problems, and sometimes because of a lack of required environments, provisioning, and so on.
- Some bugs are ALWAYS missed, and this is normal. But please, let them not be critical ones!!! :)
- Prioritize your testing, do not get distracted by third-rate features, and always keep track of the MAIN thing.
- Analyze the reasons for missed defects. Every missed bug should make you stronger. Each time, ask yourself: "How can we improve our process so that this does not happen again?"
- Try not to use solutions simply because you are "used to" them. Ask yourself EVERY DAY: "Am I using the best tools, techniques, and approaches to solve my problems? What can I improve TODAY?"
- And most important: enjoy your work, and the results will get better every day ;-)