The Value We Bring to Our Clients
Partnering with DeviQA has enabled our clients to significantly improve the efficiency of all QA-related processes: test coverage of up to 97%, accelerated product releases, better team productivity, and higher client satisfaction and loyalty.
Partner with DeviQA: see the difference
Test coverage:
90%
2.5 weeks to run regression testing
1 day to run regression testing
7 days to run smoke testing
1 day to run smoke testing
Outdated test cases
60% increase in the number of regression tests and relevant test cases
50% test coverage
90% test coverage
Supported localizations were not tested
27 localizations supported by QA engineers
No automation testing
>1500 automated test scenarios created
Dev capacity:
+15%
Didn’t have a formal QA process
Built smooth, well-functioning QA processes
No automated testing
Test automation: smoke, API, and UI tests on all supported browsers
Testing was carried out by the devs
Full coverage of the application by test cases
![](../../static/images/companies/arklign.png)
Test coverage:
97%
The existing Jira workflow gave no visibility into how many bugs remained unfixed before a release or what their priorities were
Adjusted the Jira workflow and integrated TestRail with Jira to provide per-release issue statistics
The lack of test documentation
Developed necessary test artifacts that adhere to industry standards and are easily maintainable
No automation test coverage
Added multithreading to run tests in parallel on 10 threads, cutting test run time by a factor of 7
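Parallelizing independent tests is the standard way to achieve this kind of speed-up. A minimal sketch of the idea, with hypothetical stand-in test functions (not the client's actual suite, which would typically use a runner plugin such as pytest-xdist):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for independent test cases; a real suite would
# be discovered and distributed by the test runner.
def make_test(i):
    def test():
        return f"test_{i}: passed"
    return test

tests = [make_test(i) for i in range(20)]

# Run up to 10 tests at a time, mirroring the 10-thread setup.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(lambda t: t(), tests))

print(f"{len(results)} tests finished")
```

The speed-up depends on the tests being fully independent; shared state or a shared database would serialize them again.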
Regression testing time
2x
No formal testing process
2 days to run regression testing
>1500 released features
>5k critical/blocker/major bugs detected
>7500 test cases written
>10 trained QA specialists on the client’s side
Test coverage:
90%
A product was unstable and loaded with bugs
The product became stable and robust
There were no automated tests
Automated testing was introduced
There was a lack of test documentation
Created and maintained all required test documentation
Reported bugs:
2.5k+
A multitude of undetected bugs
>60% high-priority bugs reported
~10 specs with unit tests
1400+ E2E automation scripts created
New features were not covered by automated tests
90% of the delivered features are covered with autotests
Manually executed smoke testing
10 mins to run the entire automated smoke testing suite
![](../../static/images/case-studies-companies/cydefLogo.png)
Test cases added:
1.9k+
Non-documented infrastructure creation process
<2 weeks to create the infrastructure for a new client
2 weeks to run regression testing
>1900 test cases added
1 month to create the infrastructure for a new client
>700 automated tests added
No test cases
98% test coverage of the desktop agent
Test Time:
20h
2h
20 hours to run regression testing
2 hours to run regression testing
10% test coverage
80% test coverage
1 machine for running tests
30 different machines for running autotests
1 thread for running tests
10 threads for running tests on each machine
Releases per day:
4
5
1 huge release every 1-2 months
4-5 releases per day
1 large team with an unclear scope of work
~15 squads, each with a clear and well-defined scope of work
0 test cases
200+ e2e test cases for the whole app
1-2 months to release a new feature
2 weeks max to release a new feature
Convoluted process for urgent bug-fix releases
Clear and fast way to release any fix
Regression Testing Time:
2w
2h
Manual testing activities were time-consuming, taking up to 2 weeks to complete
Automated regression testing takes 2 hours instead of 2 weeks of manual work
There was no formal QA process on the project
We established a formal QA process by designing and presenting it to the rest of the team for familiarization and implementation
![](../../static/images/companies/sprinklrLogo2.png)
Test coverage:
90%
The architecture of the test suite could not be scaled and was difficult to maintain for a large number of tests
Designed the architecture of the test framework from scratch
Tests couldn’t be integrated with other testing and DevOps tools
Integrated auto tests with Jenkins, TestRail, and Jira to have a complete test ecosystem up and running
Tests took many hours to be completed
Built a test suite that ran auto-tests using 16 threads on multiple machines
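Wiring automated tests into TestRail is usually a matter of posting each result to TestRail's v2 REST API so that Jira/TestRail dashboards can show per-release statistics. A dry-run sketch of building such a request (the host, run ID, and case ID are illustrative; a real integration would send this with an authenticated HTTP client):

```python
import json

def build_result_request(host, run_id, case_id, passed):
    """Build the URL and payload for TestRail's add_result_for_case
    endpoint. Illustrative sketch only; status_id 1 means 'passed'
    and 5 means 'failed' in TestRail's default scheme."""
    url = f"{host}/index.php?/api/v2/add_result_for_case/{run_id}/{case_id}"
    payload = {"status_id": 1 if passed else 5}
    return url, json.dumps(payload)

url, body = build_result_request("https://example.testrail.io", 42, 1337, passed=True)
print(url)
```

Typically a CI job (e.g. Jenkins) runs the suite, then loops over the results and issues one such request per case.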
![](../../static/images/companies/softNas.png)
Test coverage:
80%
Test suite was developed without maintenance and scalability support in mind
We conducted a comprehensive review of the existing test suite, identified areas for improvement, and implemented best testing practices
A high number of issues and defects present in the software
The number of issues dropped to 25% of its previous level after automation was properly implemented
Slow test results and the lack of stable automated tests, leading to hindered deployment capabilities
SoftNAS gained faster test results and stable automated tests, improving its deployment capabilities
Supported platforms:
4
Developers were responsible for testing, and they executed it chaotically
>90% of the application was covered with test cases
Many bugs were overlooked during regular test runs
100% coverage was ensured for smoke and sanity testing
There was a lack of test documentation
>300 blocker issues were detected
There were many major and critical issues in a production environment
The number of customer support issues was reduced by more than 50%
![](../../static/images/companies/therapyBrands.png)
All versions were released at the appointed time
Lack of QA resources
Created and fully stabilized the testing process
No ability to execute regression testing before each release
The first release was completed three months after we started working on the project
Prevented a huge number of blockers and crashes caused by the high volume of merges
Test coverage:
95%
>70 hours to run regression testing
18 hours to run regression testing
>20 hours to run smoke testing
4 hours to run smoke testing
Not all product modules were covered with autotests
100% coverage of all existing sub-products
<100 mobile automation tests
5 OS generations supported for both iOS and Android, across various devices
Test coverage:
90%
2 weeks to run regression testing
6 hours to run regression testing
Manual API testing
>2000 automated scripts for API testing
No smoke tests
Smoke testing taking 1 hour to run
No tests were executed after developers’ PRs
A mini API test suite is executed after each PR, taking 10 minutes only
No automation testing
>4000 automated UI and API tests
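A per-PR "mini suite" like the one described usually hits a handful of critical API endpoints and fails fast, keeping the gate under a few minutes. A self-contained sketch (the endpoint list and fake transport are illustrative, not the client's actual suite; a real run would use an HTTP client against a staging host):

```python
import json

# Illustrative critical endpoints a per-PR mini suite might cover.
CRITICAL_ENDPOINTS = ["/health", "/login", "/orders"]

def fake_get(path):
    """Stand-in transport so the sketch is runnable offline."""
    return {"status": 200, "body": json.dumps({"ok": True})}

def run_mini_suite(endpoints, get=fake_get):
    """Return the list of endpoints that failed the smoke check."""
    failures = []
    for path in endpoints:
        resp = get(path)
        if resp["status"] != 200 or not json.loads(resp["body"]).get("ok"):
            failures.append(path)
    return failures

failures = run_mini_suite(CRITICAL_ENDPOINTS)
print("PR gate:", "green" if not failures else f"failing: {failures}")
```

The CI job would simply exit non-zero when `failures` is non-empty, blocking the merge.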
E2E tests automated:
>75
>3 hours to execute manual smoke testing
~30 minutes to execute automated smoke testing
~90% of test cases were outdated
100% test cases reviewed
No automation testing
>900 test cases are up to date and integrated into the workflow
No load tests
>600 test cases updated
No performance testing
100% of the smoke tests automated
Parallel threads:
15
Outdated BE autotests
All existing autotests are up-to-date
Only smoke tests were automated on BE
~2800 autotests added
No autotests on FE
>90% of test cases are automated
Manual smoke testing on FE
~99% of smoke tests are automated
Manual release testing
~95% decrease in post-release regression bugs; test automation has proven highly effective
Bugs reported:
3000+
No formal QA process
Fine-tuned QA process set up from scratch
No automated testing
Comprehensive automation testing process implemented from scratch
No CI/CD
Migration to a new CI/CD pipeline
>12 hours to run a regression test suite
4 hours to run a regression test suite
No parallel threads
5 parallel threads
![](../../static/images/case-studies-companies/simplifieldLogo.png)
Test coverage:
95%
Lack of test cases
>400 reported bugs
Testing was executed by team members who had no expertise in QA
>1200 reported defects
Lack of testing on real devices
>50 reported improvements
No automated testing
>3 successful releases of a new mobile application
Test coverage:
90%
No formal testing process
90% test coverage
4 localizations supported
>600 automated tests created
>10% of the translations are covered with automated tests
Test coverage:
95%
~2000 autotests
~3000 autotests
No performance autotests
k6-based autotests were integrated into a release flow
Low API usage for test data setup
~90% of test data is generated via API, yielding a 2.5x reduction in execution time
Outdated Cypress version was used
The latest version of the Cypress framework is used
Test coverage:
90%
Unstructured QA process
<10 mins to run automated smoke testing
No automated testing
<1 hour to run automated regression testing
Manual execution of smoke testing on FE
~90% of the app is covered with test cases
Manual execution of regression testing
>80% of the delivered features are covered with autotests
Test coverage:
75%
Unstable software product with glitches and defects
1.3k automated scenarios in total
Manual test execution
1.5 hours to run automated regression testing
No regression testing
2 applications are covered with automated regression tests
No CI environment
1k+ test cases created
Test cases created:
2.5k+
40% test coverage
>90% of the software is covered with detailed and relevant test cases
Manual test execution
>400 automated scripts were created in the first 6 months
~1.5 weeks to run regression testing
3 days to run regression testing
~2 weeks to move stories forward
2x faster task handling
Test coverage:
90%
There was no formal QA process
An efficient and transparent QA process was set up from scratch
There was no test documentation
Well-structured test cases were created to cover the main scenarios
Automation scripts were useless because they didn’t check the required functionality
Following best practices, automation testing utilized independent tests, API-driven data creation, and the Page Object Pattern
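The Page Object Pattern mentioned above keeps selectors and UI plumbing out of the tests themselves, so tests read as intent. A minimal sketch (the `LoginPage`, its selectors, and the `FakeDriver` are illustrative; in practice the driver would be a Selenium or Playwright instance):

```python
class FakeDriver:
    """Stand-in for a browser driver so the sketch is self-contained."""
    def __init__(self):
        self.fields = {}
    def type(self, selector, text):
        self.fields[selector] = text
    def click(self, selector):
        self.fields["clicked"] = selector

class LoginPage:
    # Selectors live in one place; tests never reference them directly.
    USER = "#user"
    PASSWORD = "#password"
    SUBMIT = "#submit"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type(self.USER, user)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)
        return self

driver = FakeDriver()
LoginPage(driver).login("qa@example.com", "secret")
```

When the UI changes, only the page object is updated; the tests that call `login(...)` stay untouched, which is what makes such suites maintainable.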
Test coverage:
90%
There were no test cases.
6,000+ test cases have been created for the web apps
There were no automation scripts
2,500+ automation tests have been created
There were no mobile tests
~200 automation mobile tests have been created
There were no reporters
Allure reporter has been introduced
Test cases added:
250+
>3 hours were needed to execute manual smoke testing
<1 hour is needed to execute manual smoke testing
The test documentation was poor and irrelevant
100% of the test cases were reviewed
There was no comprehensive Postman suite for API testing
150+ Postman test cases were created and integrated into the workflow
Test coverage:
>80%
4 days were needed to run regression testing
~2 hours were needed to run regression testing
Regression tests were executed on one device only
5 various devices with different operating systems, screen resolutions, and browsers were used to run regression testing
There were less than 200 automated tests
>1000 automated tests were developed
1 browser was used
4 browsers were supported
Test coverage:
99%
There was no formal QA process
Manual story testing and verification was built from scratch
There was a lack of QA resources to test all platforms and product areas
99% of the application was covered with test cases and the smoke and regression checklists
The test documentation was incomplete
A well-structured test plan enabling thorough validation was created
Testing was executed by developers
A fully autonomous QA process was set up where QA engineers were fully responsible for testing
Test cases developed:
500+
No formal testing process
1 working day was needed to complete regression testing
<60 minutes were needed to finish smoke testing
5+ platforms were supported
5 localizations were supported
>75% of the Shops functionality was covered with regression tests
500+ E2E test cases were created
Regression testing run:
2x
2 weeks were needed to run regression testing
<1 week is needed to run regression testing
Regression testing was performed only on 2 web devices
>6 various devices with the latest OS versions are used to run regression testing
There were no auto-tests
>150 automated tests have been created
>100 bugs have been reported
3 browsers are supported
2x faster task handling
Test coverage:
>85%
Regression testing was not executed
3 hours were needed to complete regression testing
The product contained a lot of bugs and had a bad UX
>300 functional and non-functional bugs were reported
There were no automated tests
350+ automation scenarios were created
Test coverage was low
>85% of the product was covered with automated tests
There were no test cases
350+ test cases were created from scratch
Bugs reported:
>1000
The QA process was unstructured and inconsistent
The QA team participates in defining the ticket scope for upcoming releases
The previous QA team was not efficient enough
All the required test documentation has been rewritten
The test documentation was incomplete and out of date
>6,000 test cases have been written for a regression test suite for client applications
There were a lot of issues reported by clients
The DeviQA engineers have reduced the number of bugs reported by clients/therapists
Test coverage of the API:
100%
A bug-tracking system was not used
280+ regression E2E cases have been created, letting the team cut regression testing time to 1.5 days
There were no regression checklists
~40% of tests have been automated, and this value is growing
There was poor documentation coverage
100% of the core flow is covered with documentation
The tests were not automated
5+ performance suites have been created
Tests refactored:
~130
~120 failures were in a regression test run
0–1 failures per regression test run, thanks to solid test logic
Static data was leveraged
All test data is generated on the fly
There was no mechanism to clean up the data
Mechanisms for data cleaning have been created
Static test users were used
Dynamic test users are used
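Generating test users on the fly and cleaning them up afterwards is what removes the cross-test coupling that static fixtures cause. A minimal sketch of the idea (the factory and registry are illustrative; a real suite would create and delete users through the product's API):

```python
import uuid

created = []  # registry that the cleanup mechanism drains

def make_test_user():
    """Create a unique, collision-free user instead of reusing a
    static fixture shared between tests."""
    user = {
        "id": str(uuid.uuid4()),
        "email": f"qa+{uuid.uuid4().hex[:8]}@example.com",
    }
    created.append(user)
    return user

def cleanup():
    """Remove everything this run created, leaving no stale data."""
    while created:
        created.pop()

u1, u2 = make_test_user(), make_test_user()
assert u1["id"] != u2["id"]  # no shared static users, no cross-test state
cleanup()
```

Because every test gets its own data and the run ends with a clean slate, tests can be parallelized and re-run without flaking on leftover records.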
Test coverage :
90%
2.5 weeks to run regression testing
1 day to run regression testing
7 days to run smoke testing
1 day to run smoke testing
Outdated test cases
60% increase in the number of regression tests and relevant test cases
50% test coverage
90% test coverage
Supported localizations were not tested
27 localizations supported by QA engineers
No automation testing
>1500 automated test scenarios created
Dev capacity:
+15%
Didn’t have a formal QA process
Built smooth and well-working processes
Didn’t make automated testing
Test automation : smoke, API, UI on all supported browsers
Testing was carried out by the devs
Full coverage of the application by test cases
![](../../static/images/companies/arklign.png)
Test coverage :
97%
Existing workflow in Jira didn’t allow to understand how many bugs were left unfixed before the release and their priorities
Adjusted the Jira workflow, integrated TestRail with Jira to be able to see statisticls of the issues per release
The lack of test documentation
Developed necessary test artifacts that adhere to industry standards and are easily maintainable
No Automation test coverage
Added multithreading to run tests in parallel on 10 different threads, which reduced the time of the test run in 7 times
Regression testing time
2x
No formal testing process
2 days to run regression testing
>1500 released features
>5k of the critical/blocker/major bugs detected
>7500 test cases written
>10 trained QA specialists on the client’s side
Test coverage:
90%
A product was unstable and loaded with bugs
The product became stable and robust
There were no automated tests
Automated testing was introduced
There was a lack of test documentation
Created and maintained all required test documentation
Reported bugs:
2.5k+
A multitude of undetected bugs
> 60% high priority bugs reported
~10 specs with unit tests
1400+ E2E automation scripts created
New features were not covered by automated tests
90% of the delivered features are covered with autotests
Manually executed smoke testing
10 mins to run the entire automated smoke testing suite
![](../../static/images/case-studies-companies/cydefLogo.png)
Test cases added:
1.9k+
Non-documented infrastructure creation process
< 2 weeks to create the infrastructure for a new client
2 weeks to run regression testing
>1900 test cases added
1 month to create the infrastructure for a new client
>700 automated tests added
No test cases
98% test coverage of the desktop agent
Test Time:
20h
2h
20 hours to run regression testing
2 hours to run regression testing
10% test coverage
80% test coverage
1 machine for running tests
30 different machines for running autotests
1 thread for running tests
10 threads for running tests on each machine
Releases per day:
4
5
1 huge release every 1-2 months
4-5 releases per day
1 large team with an unclear scope of work
~15 squads each with a straight and clear working plan
0 test cases
200+ e2e test cases for the whole app
1-2 months to release a new feature
2 weeks max to release a new feature
Convoluted and complex process of urgent bug-fix releases
Clear and fast way to release any fix
Regression Testing Time:
2w
2h
The manual testing activities were time-consuming and took up to 2 weeks to complete
Automation regression testing took 2 hours instead of 2 weeks of manual testing activities
There was no formal QA process on the project
We established a formal QA process by designing and presenting it to the rest of the team for familiarization and implementation
![](../../static/images/companies/sprinklrLogo2.png)
Test coverage:
90%
The architecture of the test suite could not be scaled and was difficult to maintain for a large number of tests
Designed the architecture of the test framework from scratch
Tests couldn’t be integrated with other testing and DevOps tools
Integrated auto tests with Jenkins, TestRail, and Jira to have a complete test ecosystem up and running
Tests took many hours to be completed
Built a test suite that ran auto-tests using 16 threads on multiple machines
![](../../static/images/companies/softNas.png)
Test coverage:
80%
Test suite was developed without maintenance and scalability support in mind
We conducted a comprehensive review of the existing test suite, identified areas of improvement and implemented best testing practices
The high number of issues or defects that were present in the software
The number of issues reduced to 25 % after automation was properly implemented
Slow test results and the lack of stable automated tests, leading to hindered deployment capabilities
SoftNAS experienced several benefits, including faster test results and the acquisition of stable automated tests, resulting in improved deployment capabilities
Supported platforms:
4
Developers were responsible for testing, and they executed it chaotically
>90% of the application was covered with test cases
Many bugs were overlooked during regular test runs
100% coverage was ensured for smoke and sanity testing
There was a lack of test documentation
>300 blocker issues were detected
There were many major and critical issues in a production environment
The number of customer support issues was reduced by more than 50%
![](../../static/images/companies/therapyBrands.png)
All versions were released at the appointed time
Lack of QA resources
Created and fully stabilized the testing process
No ability to execute regression testing before each release
The first release was completed three months after we started working on the project
Prevented huge number of blockers and crashes caused by the big amount of merging
Test coverage:
95%
>70 hours to run regression testing
18 hours to run regression testing
>20 hours to run smoke testing
4 hours to run smoke testing
Not all product modules were covered with autotests
100% coverage of all existing sub-products
<100 mobile automation tests
5 generations of OS for iOS and Android and different devices supported
Test coverage:
90%
2 weeks to run regression testing
6 hours to run regression testing
Manual API testing
>2000+ automated scripts for API testing
No smoke tests
Smoke testing taking 1 hour to run
No tests were executed after developers’ PRs
A mini API test suite is executed after each PR, taking 10 minutes only
No automation testing
>4000+ automated UI + API tests
E2E tests automated:
>75
>3 hours to execute manual smoke testing
~30 minutes to execute automated smoke testing
~90% of test cases were outdated
100% test cases reviewed
No automation testing
>900 test cases are up to date and integrated into the workflow
No load tests
>600 test cases updated
No performance testing
100% of the smoke tests automated
Parallel threads:
15
Outdated BE autotests
All existing autotests are up-to-date
Only smoke tests were automated on BE
~2800 autotests added
No autotests on FE
>90% of test cases are automated
Manual smoke testing on FE
~99% of smoke tests are automated
Manual release testing
~95% decrease in post-release regression bugs, the implementation of test automation has proven to be highly effective
Bugs reported:
3000+
No formal QA process
Fine-tuned QA process set up from scratch
No automated testing
Comprehensive automation testing process implemented from scratch
No CI/CD
Transfer to a new CI/CD
>12 hours to run a regression test suite
4 hours to run a regression test suite
No parallel threads
5 parallel threads
![](../../static/images/case-studies-companies/simplifieldLogo.png)
Test coverage:
95%
Lack of test cases
>400 reported bugs
Testing was executed by team members who had no expertise in QA
>1200 reported defects
Lack of testing on real devices
>50 reported improvements
No automated testing
>3 successful releases of a new mobile application
Test coverage:
90%
No formal testing process
90% test coverage
4 localizations supported
>600 automated tests created
>10% of the translations are covered with automated tests
Test coverage:
95%
~2000 autotests
~3000 autotests
No performance autotests
k6-based autotests were integrated into a release flow
Low API usage for test data setup
~90% of test data is generated by API, resulting in 2.5 times reduction in execution time
Outdated Cypress version was used
The latest version of the Cypress framework is used
Test coverage:
90%
Unstructured QA process
<10 mins to run automated smoke testing
No automated testing
<1 hour to run automated regression testing
Manual execution of smoke testing on FE
~90% of the app is covered with test cases
Manual execution of regression testing
>80% of the delivered features are covered with autotests
Test coverage:
75%
Unstable software product with glitches and defects
1.3k automated scenarios in total
Manual test execution
1.5 hours to run automated regression testing
No regression testing
2 applications are covered with automated regression tests
No CI environment
1k+ test cases created
Test cases created:
2.5k+
40% test coverage
>90% of the software is covered with detailed and relevant test cases
Manual test execution
>400 automated scripts were created in the first 6 months
~1.5 weeks to run regression testing
3 days to run regression testing
~2 weeks to move stories forward
2x faster task handling
Test coverage:
90%
There was no formal QA process
An efficient and transparent QA process was set up from scratch
There was no test documentation
Well-structured test cases were created to cover the main scenarios
Automation scripts were useless because they didn’t check the required functionality
Following best practices, automation testing utilized independent tests, API-driven data creation, and the Page Object Pattern
Test coverage:
90%
There were no test cases.
6,000+ test cases have been created for the web apps
There were no automation scripts
2,500+ automation tests have been created
There were no mobile tests
~200 automation mobile tests have been created
There were no reporters
Allure reporter has been introduced
Test cases added:
250+
>3 hours were needed to execute manual smoke testing
<1 hour is needed to execute manual smoke testing
The test documentation was poor and irrelevant
100% of the test cases were reviewed
There was no comprehensive Postman suite for API testing
150+ Postman test cases were created and integrated into the workflow
Test coverage:
>80%
4 days were needed to run regression testing
~2 hours were needed to run regression testing
Regression tests were executed on one device only
5 various devices with different operating systems, screen resolutions, and browsers were used to run regression testing
There were less than 200 automated tests
>1000 automated tests were developed
1 browser was used
4 browsers were supported
Test coverage:
99%
There was no formal QA process
Manual story testing and verification was built from scratch
There was a lack of QA resources to test all platforms and product areas
99% of the application was covered with test cases and the smoke and regression checklists
The test documentation was incomplete
A well-structured test plan enabling thorough validation was created
Testing was executed by developers
A fully autonomous QA process was set up where QA engineers were fully responsible for testing
Test cases developed:
500+
No formal testing process
1 working day was needed to complete regression testing
<60 minutes were needed to finish smoke testing
5+ platforms were supported
5 localizations were supported
>75% of the Shops functionality was covered with regression tests
500+ E2EE test cases were created
Regression testing run:
2x
2 weeks were needed to run regression testing
<1 week is needed to run regression testing
Regression testing was performed only on 2 web devices
>6 various devices with the latest OS versions are used to run regression testing
There were no auto-tests
>150 automated tests have been created
>100 bugs have been reported
3 browsers are supported
2x faster task handling
Test coverage:
>85%
Regression testing was not executed
3 hours were needed to complete regression testing
The product contained a lot of bugs and had a bad UX
>300 functional and non-functional bugs were reported
There were no automated tests
350+ automation scenarios were created
Test coverage was low
>85% of the product was covered with automated tests
There were no test cases
350+ test cases were created from scratch
Bugs reported:
>1000
The QA process was unstructured and inconsistent
A QA team participates in creating ticket scope for upcoming releases
The previous QA team was not efficient enough
All the required test documentation has been rewritten
The test documentation was incomplete and out of date
>6,000 test cases have been written for a regression test suite for client applications
There were a lot of issues reported by clients
The DeviQA engineers have reduced the number of bugs reported by clients/therapists
Test coverage of the API:
100%
A bug-tracking system was not used
280+ regression E2EE cases have been created, which let a team reduce the time of regression testing to 1.5 days
There were no regression checklists
~40% of tests have been automated, and this value is growing
There was poor documentation coverage
100% of the core flow is covered with documentation
The tests were not automated
5+ performance suites have been created
Tests refactored:
~130
~120 failures were in a regression test run
0-1 failure is in a regression test run due to the solid test logic
Static data was leveraged
All test data is generated on the fly
There was no mechanism to clean up the data
Mechanisms for data cleaning have been created
Static test users were used
Dynamic test users are used
Test coverage :
90%
2.5 weeks to run regression testing
1 day to run regression testing
7 days to run smoke testing
1 day to run smoke testing
Outdated test cases
60% increase in the number of regression tests and relevant test cases
50% test coverage
90% test coverage
Supported localizations were not tested
27 localizations supported by QA engineers
No automation testing
>1500 automated test scenarios created
Dev capacity:
+15%
Didn’t have a formal QA process
Built smooth and well-working processes
Didn’t make automated testing
Test automation : smoke, API, UI on all supported browsers
Testing was carried out by the devs
Full coverage of the application by test cases
![](../../static/images/companies/arklign.png)
Test coverage :
97%
Existing workflow in Jira didn’t allow to understand how many bugs were left unfixed before the release and their priorities
Adjusted the Jira workflow, integrated TestRail with Jira to be able to see statisticls of the issues per release
The lack of test documentation
Developed necessary test artifacts that adhere to industry standards and are easily maintainable
No Automation test coverage
Added multithreading to run tests in parallel on 10 different threads, which reduced the time of the test run in 7 times
Regression testing time
2x
No formal testing process
2 days to run regression testing
>1500 released features
>5k of the critical/blocker/major bugs detected
>7500 test cases written
>10 trained QA specialists on the client’s side
Test coverage:
90%
A product was unstable and loaded with bugs
The product became stable and robust
There were no automated tests
Automated testing was introduced
There was a lack of test documentation
Created and maintained all required test documentation
Reported bugs:
2.5k+
A multitude of undetected bugs
> 60% high priority bugs reported
~10 specs with unit tests
1400+ E2E automation scripts created
New features were not covered by automated tests
90% of the delivered features are covered with autotests
Manually executed smoke testing
10 mins to run the entire automated smoke testing suite
![](../../static/images/case-studies-companies/cydefLogo.png)
Test cases added:
1.9k+
Non-documented infrastructure creation process
< 2 weeks to create the infrastructure for a new client
2 weeks to run regression testing
>1900 test cases added
1 month to create the infrastructure for a new client
>700 automated tests added
No test cases
98% test coverage of the desktop agent
Test Time:
20h
2h
20 hours to run regression testing
2 hours to run regression testing
10% test coverage
80% test coverage
1 machine for running tests
30 different machines for running autotests
1 thread for running tests
10 threads for running tests on each machine
Releases per day:
4
5
1 huge release every 1-2 months
4-5 releases per day
1 large team with an unclear scope of work
~15 squads each with a straight and clear working plan
0 test cases
200+ e2e test cases for the whole app
1-2 months to release a new feature
2 weeks max to release a new feature
Convoluted and complex process of urgent bug-fix releases
Clear and fast way to release any fix
Regression Testing Time:
2w
2h
The manual testing activities were time-consuming and took up to 2 weeks to complete
Automation regression testing took 2 hours instead of 2 weeks of manual testing activities
There was no formal QA process on the project
We established a formal QA process by designing and presenting it to the rest of the team for familiarization and implementation
![](../../static/images/companies/sprinklrLogo2.png)
Test coverage:
90%
The architecture of the test suite could not be scaled and was difficult to maintain for a large number of tests
Designed the architecture of the test framework from scratch
Tests couldn’t be integrated with other testing and DevOps tools
Integrated auto tests with Jenkins, TestRail, and Jira to have a complete test ecosystem up and running
Tests took many hours to be completed
Built a test suite that ran auto-tests using 16 threads on multiple machines
![](../../static/images/companies/softNas.png)
Test coverage:
80%
The test suite was developed without maintainability and scalability in mind
We conducted a comprehensive review of the existing test suite, identified areas for improvement, and implemented best testing practices
A high number of issues and defects were present in the software
The number of issues was reduced to 25% after automation was properly implemented
Slow test results and unstable automated tests hindered deployment capabilities
SoftNAS gained faster test results and stable automated tests, improving its deployment capabilities
Supported platforms:
4
Developers were responsible for testing, and they executed it chaotically
>90% of the application was covered with test cases
Many bugs were overlooked during regular test runs
100% coverage was ensured for smoke and sanity testing
There was a lack of test documentation
>300 blocker issues were detected
There were many major and critical issues in a production environment
The number of customer support issues was reduced by more than 50%
![](../../static/images/companies/therapyBrands.png)
All versions were released at the appointed time
Lack of QA resources
Created and fully stabilized the testing process
No ability to execute regression testing before each release
The first release was completed three months after we started working on the project
Prevented a huge number of blockers and crashes caused by extensive merging
Test coverage:
95%
>70 hours to run regression testing
18 hours to run regression testing
>20 hours to run smoke testing
4 hours to run smoke testing
Not all product modules were covered with autotests
100% coverage of all existing sub-products
<100 mobile automation tests
5 generations of OS for iOS and Android and different devices supported
Test coverage:
90%
2 weeks to run regression testing
6 hours to run regression testing
Manual API testing
2000+ automated scripts for API testing
No smoke tests
Smoke testing runs in 1 hour
No tests were executed after developers’ PRs
A mini API test suite is executed after each PR, taking only 10 minutes
No automation testing
4000+ automated UI and API tests
E2E tests automated:
>75
>3 hours to execute manual smoke testing
~30 minutes to execute automated smoke testing
~90% of test cases were outdated
100% of test cases were reviewed
No automation testing
>900 test cases are up to date and integrated into the workflow
No load tests
>600 test cases updated
No performance testing
100% of the smoke tests automated
Parallel threads:
15
Outdated BE autotests
All existing autotests are up-to-date
Only smoke tests were automated on BE
~2800 autotests added
No autotests on FE
>90% of test cases are automated
Manual smoke testing on FE
~99% of smoke tests are automated
Manual release testing
~95% decrease in post-release regression bugs; test automation has proven highly effective
Bugs reported:
3000+
No formal QA process
Fine-tuned QA process set up from scratch
No automated testing
Comprehensive automation testing process implemented from scratch
No CI/CD
Migration to a new CI/CD pipeline
>12 hours to run a regression test suite
4 hours to run a regression test suite
No parallel threads
5 parallel threads
![](../../static/images/case-studies-companies/simplifieldLogo.png)
Test coverage:
95%
Lack of test cases
>400 reported bugs
Testing was executed by team members who had no expertise in QA
>1200 reported defects
Lack of testing on real devices
>50 reported improvements
No automated testing
>3 successful releases of a new mobile application
Test coverage:
90%
No formal testing process
90% test coverage
4 localizations supported
>600 automated tests created
>10% of the translations are covered with automated tests
Test coverage:
95%
~2000 autotests
~3000 autotests
No performance autotests
k6-based autotests were integrated into a release flow
Low API usage for test data setup
~90% of test data is generated via API, resulting in a 2.5x reduction in execution time
Outdated Cypress version was used
The latest version of the Cypress framework is used
Test coverage:
90%
Unstructured QA process
<10 mins to run automated smoke testing
No automated testing
<1 hour to run automated regression testing
Manual execution of smoke testing on FE
~90% of the app is covered with test cases
Manual execution of regression testing
>80% of the delivered features are covered with autotests
Test coverage:
75%
Unstable software product with glitches and defects
1.3k automated scenarios in total
Manual test execution
1.5 hours to run automated regression testing
No regression testing
2 applications are covered with automated regression tests
No CI environment
1k+ test cases created
Test cases created:
2.5k+
40% test coverage
>90% of the software is covered with detailed and relevant test cases
Manual test execution
>400 automated scripts were created in the first 6 months
~1.5 weeks to run regression testing
3 days to run regression testing
~2 weeks to move stories forward
2x faster task handling
Test coverage:
90%
There was no formal QA process
An efficient and transparent QA process was set up from scratch
There was no test documentation
Well-structured test cases were created to cover the main scenarios
Automation scripts were useless because they didn’t check the required functionality
Automated testing was rebuilt following best practices: independent tests, API-driven data creation, and the Page Object pattern
Test coverage:
90%
There were no test cases.
6,000+ test cases have been created for the web apps
There were no automation scripts
2,500+ automated tests have been created
There were no mobile tests
~200 automated mobile tests have been created
There was no test reporting
The Allure reporter has been introduced
Test cases added:
250+
>3 hours were needed to execute manual smoke testing
<1 hour is needed to execute manual smoke testing
The test documentation was poor and irrelevant
100% of the test cases were reviewed
There was no comprehensive Postman suite for API testing
150+ Postman test cases were created and integrated into the workflow
Test coverage:
>80%
4 days were needed to run regression testing
~2 hours were needed to run regression testing
Regression tests were executed on one device only
5 various devices with different operating systems, screen resolutions, and browsers were used to run regression testing
There were less than 200 automated tests
>1000 automated tests were developed
1 browser was used
4 browsers were supported
Test coverage:
99%
There was no formal QA process
Manual story testing and verification were built from scratch
There was a lack of QA resources to test all platforms and product areas
99% of the application was covered with test cases and the smoke and regression checklists
The test documentation was incomplete
A well-structured test plan enabling thorough validation was created
Testing was executed by developers
A fully autonomous QA process was set up where QA engineers were fully responsible for testing
Test cases developed:
500+
No formal testing process
1 working day was needed to complete regression testing
<60 minutes were needed to finish smoke testing
5+ platforms were supported
5 localizations were supported
>75% of the Shops functionality was covered with regression tests
500+ E2E test cases were created
Regression testing run:
2x
2 weeks were needed to run regression testing
<1 week is needed to run regression testing
Regression testing was performed only on 2 web devices
>6 various devices with the latest OS versions are used to run regression testing
There were no auto-tests
>150 automated tests have been created
>100 bugs have been reported
3 browsers are supported
2x faster task handling
Test coverage:
>85%
Regression testing was not executed
3 hours were needed to complete regression testing
The product contained a lot of bugs and had a bad UX
>300 functional and non-functional bugs were reported
There were no automated tests
350+ automation scenarios were created
Test coverage was low
>85% of the product was covered with automated tests
There were no test cases
350+ test cases were created from scratch
Bugs reported:
>1000
The QA process was unstructured and inconsistent
The QA team participates in defining the ticket scope for upcoming releases
The previous QA team was not efficient enough
All the required test documentation has been rewritten
The test documentation was incomplete and out of date
>6,000 test cases have been written for a regression test suite for client applications
There were a lot of issues reported by clients
The DeviQA engineers have reduced the number of bugs reported by clients/therapists
Test coverage of the API:
100%
A bug-tracking system was not used
280+ regression E2E test cases have been created, letting the team reduce regression testing time to 1.5 days
There were no regression checklists
~40% of tests have been automated, and this value is growing
There was poor documentation coverage
100% of the core flow is covered with documentation
The tests were not automated
5+ performance suites have been created
Tests refactored:
~130
~120 failures per regression test run
0-1 failures per regression test run thanks to solid test logic
Static data was leveraged
All test data is generated on the fly
There was no mechanism to clean up the data
Mechanisms for data cleaning have been created
Static test users were used
Dynamic test users are used