DeviQA is a finalist of the Software Testing Award 2019

Solebit (MimeCast)

Automated Testing, Web Testing, Performance Testing, Software Testing Consultancy

Project Overview

What Solebit (MimeCast)'s testing processes looked like when they came to us, and what they gained once they started working with us.

Before improvement

Tests were triggered manually by developers in a terminal
Results were collected in a single large, hard-to-read file
Tests took more than 20 hours to complete
The history of the test runs was unavailable
The test suite architecture was unscalable
UI tests were absent
All test results were lost if some tests failed for technical reasons

After improvement

We reduced the build time by a factor of 10
Configured a simple way to run tests without involving a technical person
Created a reporting system that allowed the team to view clean reports even if the results contained 10k rows
Configured distribution of file processing across multiple machines (Azure Cloud, Google Cloud)
Autotests for uploading files were running in parallel on 30 different machines, with 10 threads on each machine.
Test cases covered 80% of the application
We added UI tests that were triggered automatically after each deploy
We stored the main test result file along with per-machine results
25,000 auto tests developed
6+ platforms covered by auto tests
10x faster testing time
10 parallel threads
500K+ files uploaded
30 machines used
QA Team: 4 Senior SDETs
Project length: 4 years
Technologies & Tools
Ruby
Faraday
Cucumber
Linux
REST API
Azure Cloud
Google Cloud
Docker
LXC containers

The Challenge

Solebit (MimeCast) provides identification and prevention of zero-day malware and unknown threats.

When we joined the project, automated tests designed and developed by the in-house development team were already in place. However, they were of little use: every test run had to be triggered manually in a terminal, and as a result the team ended up with a single large, complicated results file.

Also, the tests took more than 20 hours to complete, and the build history was unavailable.

The test suite architecture was unscalable, making it difficult to maintain a large number of test machines.

The task was to create a simple test runner and to speed the tests up. To achieve this, we had to redesign the architecture to support cloud platform integration, make the tests much easier to run, and generate a clean report with all the necessary details.

At the same time, we were asked to propose improvements to the current testing process and to do some manual testing as well.

Achievements

DeviQA created complex, detailed automated scenarios for testing REST APIs with the Faraday library. We built a software development kit application that ran on various machines, including Azure, Google Cloud, Docker, and both privileged and unprivileged LXC containers. The QA team implemented these integrations over SSH and SFTP connections. Altogether, we designed and developed more than 25,000 tests, and more than 500,000 files were uploaded over the test runs.

We reduced the build time by a factor of 10 and configured a simple way to run tests without involving a technical person. We also created a reporting system that let the team view clean reports even when the results contained 10k rows. Finally, we configured distribution of file processing across multiple machines (Azure Cloud, Google Cloud).
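Aggregating per-machine result files into one report can be sketched like this; the file layout (`machine_*.json`) and the result fields are invented for illustration and are not the project's actual report format.

```ruby
require 'json'
require 'tmpdir'

# Hypothetical merge of per-machine JSON result files into one summary.
def merge_results(dir)
  rows = Dir.glob(File.join(dir, 'machine_*.json')).flat_map do |path|
    JSON.parse(File.read(path))
  end
  { 'total' => rows.size, 'failed' => rows.count { |r| r['status'] == 'failed' } }
end

report = nil
Dir.mktmpdir do |dir|
  # Fake per-machine result files standing in for real test output.
  File.write(File.join(dir, 'machine_1.json'),
             JSON.generate([{ 'test' => 'upload_pdf', 'status' => 'passed' }]))
  File.write(File.join(dir, 'machine_2.json'),
             JSON.generate([{ 'test' => 'upload_exe', 'status' => 'failed' }]))
  report = merge_results(dir)
end

puts "#{report['total']} results, #{report['failed']} failed"
```

The same pattern scales to result files from dozens of machines: collect, parse, and fold into one summary before rendering the report.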

Services Provided

Most of our work focused on the automated testing of REST APIs, automated email sending through SMTP, DevOps, and configuration activities. Autotests for uploading files ran in parallel on 30 different machines, with 10 threads on each machine.
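The per-machine parallelism can be illustrated with a minimal Ruby sketch: each machine splits its upload workload across a fixed thread pool (10 threads per machine in the project). The file names and the size of the workload here are placeholders.

```ruby
# Minimal sketch of one machine's worker pool, assuming a shared work queue.
THREADS = 10
files = (1..50).map { |i| "sample_#{i}.bin" }

queue = Queue.new
files.each { |f| queue << f }

results = Queue.new
workers = THREADS.times.map do
  Thread.new do
    loop do
      file = begin
        queue.pop(true)   # non-blocking pop
      rescue ThreadError
        break             # queue drained, worker exits
      end
      # In the real suite this would upload the file and record the verdict.
      results << "#{file}: uploaded"
    end
  end
end
workers.each(&:join)

puts "#{results.size} files processed by #{THREADS} threads"
```

Running the same pool on 30 machines, each pulling from its own slice of the file set, gives the 30 × 10 parallelism described above.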

Automated Testing

We also tested the UI part of the application and prepared comprehensive test documentation. The test cases covered 80% of the application, and we created test suites for each release for different customers.

Web Testing

We designed and developed performance/load and stress tests and integrated them into CI. In addition to checking how the system behaved under pressure, we also checked how quickly it was up and running again after a crash.
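A load check of this kind boils down to firing a fixed number of requests at a given concurrency and timing the run. The skeleton below is illustrative only: `probe` is a stand-in for a real HTTP health check against the application, and the request and concurrency figures are arbitrary.

```ruby
require 'benchmark'

# Placeholder for a real HTTP health check; here it just simulates latency.
def probe
  sleep 0.01 # pretend the endpoint answers in ~10 ms
  :ok
end

CONCURRENCY = 10
REQUESTS = 50

queue = Queue.new
REQUESTS.times { |i| queue << i }

elapsed = Benchmark.realtime do
  CONCURRENCY.times.map do
    Thread.new do
      loop do
        begin
          queue.pop(true) # non-blocking pop
        rescue ThreadError
          break           # no work left
        end
        probe
      end
    end
  end.each(&:join)
end

puts format('%d requests at concurrency %d in %.2f s', REQUESTS, CONCURRENCY, elapsed)
```

The same skeleton measures crash recovery if `probe` is pointed at a restarted instance and the loop retries until the first successful response, with the elapsed time reported as time-to-recovery.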

Performance Testing

Before starting a long-term journey with the client, we proved our expertise by performing a QA audit. We identified weak spots in the infrastructure, including inflexible machines, a low machine count for test runs, the need to clean up previous application versions, and long machine creation times before builds. We provided clean and simple ways to fix these issues.

Software Testing Consultancy