How We Reduce Regression Testing to 48 Hours

Written by Manuel Kaiser, Software Tester

Imagine any repetitive task you work on. If you spend more time on it, you expect a better output: more quantity or higher quality. If the output does not improve despite the extra time, you start asking yourself: Is there a more effective way to accomplish this task?

We, the QA team at Runtastic, experienced this exact situation when smoke testing our apps. We use the term “smoke test” for the very last testing phase of app builds before we release them. This kind of test basically covers the most important app functionality. We learned that each release needed more time for testing. Even though we took the time, the quality did not increase proportionally.

Challenges Within the Smoke Test

To address this, we evaluated the existing smoke test. In the first step, we identified three key challenges:

  • Is it really beneficial to do two rounds of testing on each platform?
    Until then we had tested each release candidate on both an old version and the latest version of each operating system. So we always did four rounds of testing in total — two on iOS and two on Android.
  • Is it really helpful to have the same setup for each round of testing? Could some tests be redundant?
    Until then we had used the same set of test cases for each test and didn’t think of which parts of the app were really affected by the changes in the version that had to be tested. If there were only improvements in the training plan section, for example, why would you also test the app’s newsfeed?

  • Is our way of handling the smoke test the best way to obtain the highest level of quality in the app?
    Until then we had used written test cases to get the smoke test done. One of the biggest benefits of written test cases was that even beginner testers could accomplish a test round. On the other hand, using test cases was monotonous for experienced testers and introduced the risk of missing painful bugs.

Focus testing on parts that were updated

After identifying the challenges, we moved on to create a new way of smoke testing. This started with renaming the smoke test to regression test. Why did we rename it? Our industry defines regression testing as testing that makes sure the existing functionality of the software still works after changes are added. Smoke testing, in contrast, focuses on finding faults in the core functionality that would block further work on a build. Bearing this terminology in mind, regression testing better describes what we want to achieve with this kind of testing: Our goal is to guarantee that the parts of the app which were affected by the changes still work.

Test with user flows instead of test cases

In order to avoid redundant test cases and to make testing more exciting, we decided to give up on written test cases and switch to a mindmap which displays user flows for most features that can be found in our apps. A user flow is a clear way to answer the following question: What does a user have to do to get things done within our app? Example: What is necessary to do a run with the Runtastic app where the user wants to be followed by his/her friends?
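As a sketch of this idea, a user flow from such a mindmap can be modeled as a named sequence of steps tagged with the feature it exercises, so that flows can be selected based on what changed in a release. The flow names, feature tags, and helper below are hypothetical illustrations, not Runtastic's actual mindmap or tooling:

```python
from dataclasses import dataclass

@dataclass
class UserFlow:
    name: str
    feature: str       # app area this flow exercises
    steps: list[str]   # what the user does, in order

# Hypothetical flows; the real mindmap covers most features of the apps.
flows = [
    UserFlow(
        name="Tracked run with live followers",
        feature="activity_tracking",
        steps=[
            "Open the app and start a run",
            "Enable live tracking so friends can follow",
            "Finish and save the run",
            "Check that the activity appears for followers",
        ],
    ),
    UserFlow(
        name="Start a training plan session",
        feature="training_plan",
        steps=["Open the training plan", "Start today's session", "Complete it"],
    ),
]

def flows_for_release(all_flows, changed_features):
    """Keep only the flows that touch features changed in this release."""
    return [f for f in all_flows if f.feature in changed_features]

# A release that only changed the training plan section:
selected = flows_for_release(flows, {"training_plan"})
print([f.name for f in selected])  # only the training plan flow remains
```

Selecting flows by affected feature is what keeps the regression test small: a release that only touches the training plan does not pull in the newsfeed flows.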

A different device for each flow

So there was one challenge left: Two rounds of testing on each platform. It was clear to us that the users’ experience in our apps is different on a device which runs on Android 5 compared to a device using Android 8. We overcame this challenge by testing each flow on a different device or — to be more precise — with a different version of the operating system. We decided to keep the separate testing for iOS and Android. We test a similar set of flows on each platform, but within every platform we use different devices.
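The pairing of flows with devices can be sketched as a simple assignment: rotate through the platform's device pool so that each flow runs on a different OS version. The device names and flows below are hypothetical examples, not our actual device lab:

```python
from itertools import cycle

# Hypothetical Android device pool, spanning old and new OS versions.
android_devices = [
    "Pixel (Android 5)",
    "Galaxy S8 (Android 8)",
    "Pixel 2 (Android 9)",
]

# Flows selected for this release (example names).
selected_flows = [
    "Tracked run with live followers",
    "Start a training plan session",
    "Post to the newsfeed",
]

# Rotate through the pool so each flow lands on a different OS version;
# with more flows than devices, the pool simply wraps around.
assignment = dict(zip(selected_flows, cycle(android_devices)))
for flow, device in assignment.items():
    print(f"{flow} -> {device}")
```

The same assignment is repeated for the iOS device pool, so both platforms still get their own round with a similar set of flows.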

Get testing done within 48 hours

The new testing method was a good start, but we went one step further and put it under a time limit. We agreed to execute the regression test within 48 hours: if we can complete it in that window, we can be confident that this is the right way to handle this kind of testing in the future, when we want to ship app updates in shorter time intervals.

Squad feedback on the new process

So far we have done a couple of releases with the new regression testing method. It is now time to evaluate the process. One way to do so is to gather feedback from the people involved in it: testers and test managers.

Test managers are part of the squad, so their feedback reflects the squad’s opinion on our new regression testing. As mentioned before, we also expected the new setup to handle shorter time intervals between releases more appropriately, and the squad’s feedback confirmed this. We all agree that our new testing setup, alongside test automation, is the right tool for maintaining quality with shorter intervals between releases. Another positive aspect the squad mentioned is the use of user flows: it makes testing much more realistic.

Tester feedback

Testers are really fond of the new process. They are no longer bored by repeating the same test steps from a wall of text over and over again. They now have the option to be creative and come up with new approaches for testing the app.

Room for future improvements

Even though the new process is a big step forward in how we test our apps, there is still room for improvement. We as QA have to establish a better understanding of the impact of the changes in a release. Only by better understanding which features of the app are affected by changes can we select the appropriate user flows for testing. With the right user flows, the new regression testing process will make it possible to release updates for our apps in shorter time intervals.
