
Mobile Automated Testing of the Messenger with an Encryption Algorithm

By DeviQA on April 26, 2019

This article is a brief guide to what you may encounter during automated testing of a mobile application such as an instant messenger. In this article, we will talk about:

1. What approach we used and why.

2. Which issues you may face when running tests on several devices at the same time, with automated interaction between the devices.

3. Connecting new devices or replacing existing ones.

4. How to log test output from multiple devices.

5. How to set up an algorithm for restarting failed tests to eliminate instability.

6. How to make tests more structured.

7. How to solve problems with unstable initialization of WDA and how to keep all test information in one place.

These are some of the challenges you may face during mobile app testing of an instant messenger. We are ready to share our solutions.

Project Requirements

The main task was to write automated scripts for the existing, extensive base of test cases that fully covers the functionality of the whole application. Another requirement was to use real devices instead of emulators.

Project Overview

We will cover all test cases with stable, reliable and quick automated tests for the mobile application. The application is a modern instant messenger with an advanced encryption algorithm. It allows you to exchange messages and make audio calls, and it offers a wide range of settings for convenient and secure data transfer.

Our Solution

To implement it, we decided to use a time-tested technology stack: Java + Appium + TestNG + Allure. Appium uses standard APIs, which makes it a good choice when developing tests for both Android and iOS applications. To make our test solution even more robust and effective, we designed and implemented a continuous integration solution based on Jenkins. We chose Jenkins CI because it is a flexible tool with a large set of custom plug-ins, which gives us the ability to integrate any tech stack in an appropriate way. Our CI solution worked based on the following algorithm:

CI Process of DeviQA mobile automation testing

For reporting, we chose Allure. The reports generated by this framework look clear and modern, with customization options and flexible settings. A report provides a lot of useful information about each test: its duration, a link to the test case, and the body of the test with all steps and their durations, as well as screenshots from each device if the test fails. An important addition is the launch history, which makes it possible to judge whether a particular test is stable. There is also information about all retries, with screenshots and error descriptions, which allows you to pinpoint exactly where a test is unstable and needs attention.
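To illustrate how that information gets into the report, a test annotated for Allure might look roughly like the sketch below. This is a minimal illustration; the test-case link, group names, and step wording are assumptions, not the project's actual code.

```java
import io.qameta.allure.Link;
import io.qameta.allure.Step;
import org.testng.annotations.Test;

public class SendMessageTest {

    // The @Link points the Allure report to the corresponding manual test case (hypothetical URL).
    @Link(name = "TC-1024", url = "https://testrail.example.com/cases/1024")
    @Test(groups = {"smoke", "messaging"})
    public void messageIsDeliveredToSecondDevice() {
        sendMessage("device-1", "hello");
        verifyMessageReceived("device-2", "hello");
    }

    // Each @Step becomes a named, timed entry in the report body.
    @Step("Send '{text}' from {device}")
    private void sendMessage(String device, String text) {
        // ... driver interactions for the sending device
    }

    @Step("Verify '{text}' is received on {device}")
    private void verifyMessageReceived(String device, String text) {
        // On failure a screenshot can be attached via io.qameta.allure.Allure.addAttachment(...)
    }
}
```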

Examples of Allure reports

Challenges:

1. Execute tests using at least 3 real devices

During development, one of the difficulties was making the Appium server start automatically so that at least three real devices could be connected in parallel. Since the main functions of the application are messaging and calls, we needed several devices to check that a message sent from one device arrives on another, to set up conference calls, and so on. To make this happen, we used a multi-threaded architecture that runs several instances of Appium at the same time, plus a controller that synchronizes operations between the devices.
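A minimal sketch of that idea, assuming one Appium server per device, each on its own port (the class name and port handling are illustrative):

```java
import io.appium.java_client.service.local.AppiumDriverLocalService;
import io.appium.java_client.service.local.AppiumServiceBuilder;

import java.util.ArrayList;
import java.util.List;

public class AppiumServerPool {

    private final List<AppiumDriverLocalService> services = new ArrayList<>();

    // Start one Appium server per device, each listening on its own port.
    public void startAll(int... ports) {
        for (int port : ports) {
            AppiumDriverLocalService service = new AppiumServiceBuilder()
                    .usingPort(port)
                    .build();
            service.start();
            services.add(service);
        }
    }

    // Shut every server down at the end of the run.
    public void stopAll() {
        services.forEach(AppiumDriverLocalService::stop);
    }
}
```

Each device's driver is then pointed at its own server URL, so the three (or more) sessions can run side by side.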

2. Connecting a new device or replacing an existing one

One of the key requirements was that the user should be able to connect new devices or replace existing ones with minimal effort. We solved this in the following way: information about all devices is stored in a separate config file that is parsed in a loop, so to add or remove a device you only need to add or delete a couple of lines in the config. All the capabilities necessary to initialize a driver are taken from the config: platform version, WDA port, UDID, etc. The driver that runs each device is saved to a map and is available throughout test execution.
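In broad strokes, the loop over the config might look like the following minimal sketch, assuming each device entry has been parsed into a map of fields (the field names, the 'wdaLocalPort' capability, and the class name are illustrative assumptions):

```java
import io.appium.java_client.AppiumDriver;
import io.appium.java_client.ios.IOSDriver;
import io.appium.java_client.remote.MobileCapabilityType;

import org.openqa.selenium.remote.DesiredCapabilities;

import java.net.URL;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DriverFactory {

    private final Map<String, AppiumDriver> drivers = new HashMap<>();

    // One driver per configured device; each entry stays available for the whole run.
    public void initDrivers(List<Map<String, String>> devices) throws Exception {
        for (Map<String, String> device : devices) {
            DesiredCapabilities caps = new DesiredCapabilities();
            caps.setCapability(MobileCapabilityType.PLATFORM_VERSION, device.get("platformVersion"));
            caps.setCapability(MobileCapabilityType.UDID, device.get("udid"));
            caps.setCapability("wdaLocalPort", Integer.parseInt(device.get("wdaPort")));

            drivers.put(device.get("name"),
                    new IOSDriver(new URL(device.get("appiumUrl")), caps));
        }
    }

    public AppiumDriver getDriver(String deviceName) {
        return drivers.get(deviceName);
    }
}
```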

3. Test logging

Another problem was logging. In the end, we had several drivers, each associated with a separate device, and we had to work out how to get a readable, understandable log that could be used easily. We decided to use annotations and a dedicated logger class that keeps an individual log for each driver. This significantly improved the structure of the logs; moreover, even a non-technical person can easily understand all the information about the behavior of the tests.
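A simplified sketch of that idea: a thin wrapper that tags every entry with the device it belongs to, so interleaved output from parallel drivers stays readable (the class and method names are illustrative, not the project's actual logger):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Tags every log entry with the device it came from.
public class DeviceLogger {

    private static final Logger LOG = LoggerFactory.getLogger(DeviceLogger.class);

    private final String deviceName;

    public DeviceLogger(String deviceName) {
        this.deviceName = deviceName;
    }

    public void step(String message) {
        LOG.info("[{}] {}", deviceName, message);
    }

    public void error(String message, Throwable cause) {
        LOG.error("[{}] {}", deviceName, message, cause);
    }
}
```

Each driver gets its own DeviceLogger instance, so a line in the combined log always shows which device produced it.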

4. Restarting failed tests

Another important task was to set up an algorithm for restarting failed tests to eliminate instability. For this, we used a transformer of test annotations. Using annotations, we also divided the tests into different categories, which gave our client the option to run tests depending on the functionality that needs to be checked. So here we actually resolved two issues: we improved the stability of the tests and made them more configurable.
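The standard TestNG hooks for this are IRetryAnalyzer and IAnnotationTransformer; a minimal sketch, with an illustrative retry count, might look like this:

```java
import org.testng.IAnnotationTransformer;
import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;
import org.testng.annotations.ITestAnnotation;

import java.lang.reflect.Constructor;
import java.lang.reflect.Method;

// Re-runs a failed test a limited number of times to smooth out flakiness.
public class RetryAnalyzer implements IRetryAnalyzer {

    private static final int MAX_RETRIES = 2;
    private int attempt = 0;

    @Override
    public boolean retry(ITestResult result) {
        return attempt++ < MAX_RETRIES;
    }
}

// Attaches the retry analyzer to every @Test without editing each test class.
class RetryTransformer implements IAnnotationTransformer {

    @Override
    public void transform(ITestAnnotation annotation, Class testClass,
                          Constructor testConstructor, Method testMethod) {
        annotation.setRetryAnalyzer(RetryAnalyzer.class);
    }
}
```

The transformer is then registered as a listener in testng.xml, and the test categories can be expressed as TestNG groups on the test annotations.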

5. Unstable initialization of WDA

At the beginning of development, we encountered a problem with unstable initialization of WDA (WebDriverAgent). We solved it by adding extra capabilities to the iOS mobile driver. Because the mobile application often interacts with external applications, we decided to initialize the driver and restart the application before each test. This had one side effect: due to the repeated initialization of WDA, the driver crashed very often. To solve this, we added the ‘WDA_STARTUP_RETRIES’ capability, which increased driver initialization time but allowed the tests to run stably.
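In capability terms, the fix amounts to something like the following when creating the iOS driver (a sketch; the retry count and interval are illustrative values, and the class name is an assumption):

```java
import io.appium.java_client.ios.IOSDriver;
import io.appium.java_client.remote.IOSMobileCapabilityType;
import io.appium.java_client.remote.MobileCapabilityType;

import org.openqa.selenium.remote.DesiredCapabilities;

import java.net.URL;

public class IosDriverProvider {

    public IOSDriver createDriver(String serverUrl, String udid) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability(MobileCapabilityType.UDID, udid);
        caps.setCapability(MobileCapabilityType.AUTOMATION_NAME, "XCUITest");
        // Retry WebDriverAgent startup instead of failing the whole session on the first hiccup.
        caps.setCapability(IOSMobileCapabilityType.WDA_STARTUP_RETRIES, 4);
        caps.setCapability(IOSMobileCapabilityType.WDA_STARTUP_RETRY_INTERVAL, 20_000);

        return new IOSDriver(new URL(serverUrl), caps);
    }
}
```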

6. Store all test data in one place

Also, in order to store all the test data in one place, a dedicated data provider class was developed. The class parses an .xlsx document and then passes the data to the relevant test. This makes the test suite more configurable and maintainable, as you have to make a change in only one place, and it is then applied to the entire test suite.
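A data provider of that kind is typically built on Apache POI; here is a minimal sketch, assuming each spreadsheet row holds one test's parameters (the file name, sheet layout, and provider name are illustrative assumptions):

```java
import org.apache.poi.ss.usermodel.DataFormatter;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
import org.testng.annotations.DataProvider;

import java.io.FileInputStream;
import java.util.ArrayList;
import java.util.List;

public class ExcelDataProvider {

    // Each row of the sheet becomes one set of test parameters.
    @DataProvider(name = "messages")
    public static Object[][] messages() throws Exception {
        DataFormatter formatter = new DataFormatter();
        List<Object[]> rows = new ArrayList<>();

        try (Workbook workbook = new XSSFWorkbook(new FileInputStream("test-data.xlsx"))) {
            Sheet sheet = workbook.getSheetAt(0);
            for (Row row : sheet) {
                rows.add(new Object[]{
                        formatter.formatCellValue(row.getCell(0)),  // e.g. sender
                        formatter.formatCellValue(row.getCell(1))   // e.g. message text
                });
            }
        }
        return rows.toArray(new Object[0][]);
    }
}
```

A test then consumes the data with @Test(dataProvider = "messages", dataProviderClass = ExcelDataProvider.class), so changing the spreadsheet updates the whole suite.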

Project scope

While working on the mobile automation testing, we constantly communicated with the QA department on the customer's side. The QA lead participated in selecting the test cases to automate in each sprint. We built a transparent and efficient communication process with the QA and development departments on the client's side. We started with the most important cases first, and after all critical/major test cases were covered, we moved our focus to lower-priority cases. This approach helped our client get as much benefit as possible from our automated tests. From the first week of our automation development, our client had smoke coverage for the application. All main features of the application were covered by automated tests, which allowed our client to significantly reduce the time spent on manual regression testing. Before the implementation of automated tests, a complete smoke testing cycle took 3 days of manual work; now it takes less than 4 hours.

Results in numbers

Three full-time engineers were involved in developing the automated tests. Over 12 months of work (~5,000 working hours), more than 1,000 tests were developed, 1,000+ test cases designed, and 100+ bugs reported.