Our Android Way to Fail-Safe Automated Test Runs Through Data Snapshots

by Valentin Martinet
Software Engineer Android

At Runtastic, our goal has always been to support our users in being the best version of themselves through high-quality health and fitness apps. We also want to provide our users with a seamless user experience, which of course also means as few bugs as possible.

In the past, we relied heavily on manual tests to ensure the quality of our mobile products, mainly because we hadn’t found a maintainable approach to UI tests. The team was also smaller, so the lack of tests on this level wasn’t a big issue for us. But things changed as the team and the products grew: the cost of not having automated tests increased. That’s why, at the beginning of 2017, we started an initiative to build up automated tests in our mobile apps.

That’s when we gradually started to replace manual tests (done by our QA team) with automated tests, since their benefits are undeniable. We won’t cover all of them here, but despite the initial effort of writing them, automated tests save precious time in the long run. At Runtastic, we rely on them to ship features faster and with higher confidence. We write multiple types of tests to ensure our app works properly; in this blog post, we focus on the automated tests that exercise a user flow the same way a real user would (a.k.a. UI tests or end-to-end tests).

Issues

Once we had written our first automated tests and made them run regularly in our continuous integration system, we found that they failed in ways we never saw when running them locally on physical devices. Although having a lot of tests is good for coverage, it can also be error-prone: the app state (shared preferences and database) is shared across consecutive test runs, which can lead to unexpected behavior that eventually makes a test fail.

Let’s consider our bodyweight training app Results. The usual testing process consists of logging a test user into the app and then running the tests. Several test scenarios can lead to failures during consecutive runs:

  • A health warning screen is shown to users before their very first workout with the Results app. Once it is acknowledged, it won’t be shown again, and we save this as a flag in the app settings. If the app data is cleared on purpose, or if the app gets uninstalled and reinstalled, this flag is reset, and the screen will be shown during the user’s next workout. Any test that doesn’t handle the possible appearance of this screen will fail.
  • We want to ensure that the user progresses through the 12-week training plan as expected. We thus have a test that performs the last workout of the first week and asserts that the transition to the second week has occurred. While this is easily testable, it won’t work on the next test run because the user’s progress has already been saved to the database: the last workout of the first week is already done, and the user is already in the second week, whereas the test expects them to still be in the first week.
  • For each of our tests, we need to log a test user into the app as the main feature is a personalized training plan. This process affects the duration of the test execution as we have to fill in the login form and wait for our pre-production backend to send back an access token. Furthermore, in the case of any outage in this environment, all test runs will fail as well because they are network-dependent.

Of course, it is possible to tackle some of those issues with boilerplate code to roll back training plan progress, ensure specific screens don’t show up, etc. However, this approach doesn’t scale to our large and growing codebase of tests, it is time-consuming, and it requires a lot of effort to maintain as we constantly develop new features. It was clear to us that we needed to tackle the above issues in a developer-friendly way to improve the robustness of our test runs.

Wouldn’t it be great if we could run each of our tests against a specific app state, so that the developer can fully focus on the functionality being tested without having to handle such test-related issues?

Solution

The solution we came up with is to save a snapshot of the app state specific to a test scenario and restore it before a test run. A database dump is created, and the app’s shared preferences are exported together with the user’s access token (which lets us skip the login step and stay network-independent). All this data is zipped into a file that we add to the assets folder of the Android Studio project; a test class then imports it before the tests it contains are run.

So how do we get there technically? Here is our shopping list:

  • To snapshot the current state of the app to be tested and create the app data zip file, we need a helper class taking as input a list of files to be exported.
  • Importing the data is triggered by a Java annotation to which we pass the zip file as a parameter. This annotation is added to the test class and is basically all the developer needs to get tests running on a snapshotted app state.

In the following paragraphs, we detail our (simplified) implementation of this process from exporting the app data to using it in a test.

Exporting app data

We need several helper methods to export the app data. First, we retrieve the app data files:

public static File[] getPackageDir(Context context) {
  if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
    return context.getDataDir().listFiles();
  } else {
    return new File(context.getApplicationInfo().dataDir).listFiles();
  }
}

Then, we need a file filter to include only the shared preferences and databases from the app data:

public static FileFilter getDefaultFilter() {
  return file -> file.isDirectory()
    || (file.getAbsolutePath().contains("shared_prefs")
    || file.getAbsolutePath().contains("databases"));
}

Finally, we can create the desired zip file using the helper methods above and the code below, which we use in an Activity helper:

File destFile = new File(getFilesDir().getAbsolutePath() + File.separator
    + "shared_files" + File.separator + "appdata.zip");
// make sure the target directory exists before opening the output stream
destFile.getParentFile().mkdirs();

ZipOutputStream zos = new ZipOutputStream(
    new BufferedOutputStream(new FileOutputStream(destFile)));
try {
  for (File file : getPackageDir(this)) {
    zipFile(zos, file, null);
  }
} finally {
  zos.finish();
  zos.close();
}

private void zipFile(ZipOutputStream zos, File file, String dir) throws Exception {
  if (file.isDirectory()) {
    String newDir = dir == null ? file.getName() : dir + File.separator + file.getName();
    for (File f : file.listFiles(getDefaultFilter())) {
      zipFile(zos, f, newDir);
    }
    return;
  }

  ZipEntry entry = new ZipEntry(
      dir == null ? file.getName() : dir + File.separator + file.getName());
  zos.putNextEntry(entry);

  byte[] bytes = new byte[1024 * 10];
  // try-with-resources ensures the input stream is closed even if writing fails
  try (BufferedInputStream bis = new BufferedInputStream(new FileInputStream(file))) {
    int read;
    while ((read = bis.read(bytes)) != -1) {
      zos.write(bytes, 0, read);
    }
  }
  zos.closeEntry();
}

Importing app data

Once the zip file is exported, we move it to the assets folder of the Android Studio project. Importing the data is then achieved thanks to the helper class below: it handles the unzipping process, moves the unzipped data to the file system, and sets the shared preferences.

public class DataImporter {

  public static void importData(Context context, File fileToUnzip) throws Exception {
    // current database must be closed first
    context.getContentResolver().query(/* <your_close_db_uri> */, null, null, null, null);

    final File unzipDir = new File(fileToUnzip.getAbsolutePath()
        .substring(0, fileToUnzip.getAbsolutePath().lastIndexOf(".zip"))
        + File.separator);

    unzipDir.mkdirs();

    try (ZipInputStream zis = new ZipInputStream(new FileInputStream(fileToUnzip))) {
      ZipEntry entry;
      while ((entry = zis.getNextEntry()) != null) {
        if (entry.isDirectory()) {
          new File(unzipDir + File.separator + entry.getName()).mkdirs();
        } else {
          final String entryName = entry.getName();
          int index = entryName.lastIndexOf(File.separatorChar);
          if (index != -1) {
            File dir = new File(unzipDir + File.separator + entry.getName().substring(0, index));
            dir.mkdirs();
          }
          FileOutputStream fout = new FileOutputStream(unzipDir + File.separator + entry.getName());
          unzipFile(zis, fout);
        }
      }
    }

    final String moveTo = context.getFilesDir().getParentFile().getAbsolutePath();
    for (File file : unzipDir.listFiles()) {
      moveUnzippedData(context, file, moveTo);
    }

  }

  private static void moveUnzippedData(Context context, File file, String dir) {
    if (file.isDirectory()) {
      String newDir = dir + File.separator + file.getName();
      for (File f : file.listFiles()) {
        moveUnzippedData(context, f, newDir);
      }

      return;
    }

    if (file.exists() && file.getPath().contains("shared_prefs")) {
      readSharedPrefs(context, dir, file);
    } else {
      File newFile = new File(dir + File.separator + file.getName());
      newFile.getParentFile().mkdirs();

      if (newFile.exists()) {
        newFile.delete();
      }

      file.renameTo(newFile);

      // necessary due to database access (and probably all other files as well)
      newFile.setWritable(true, false);
      newFile.setReadable(true, false);
    }
  }

  private static void unzipFile(ZipInputStream zis, FileOutputStream fout) throws Exception {
    byte[] buffer = new byte[1024 * 4];
    int readBytes;

    while ((readBytes = zis.read(buffer)) != -1) {
      fout.write(buffer, 0, readBytes);
      fout.flush();
    }

    zis.closeEntry();
    fout.close();
  }

  private static void readSharedPrefs(Context context, String destDir, File originFile) {
    String originalFileName = originFile.getName().substring(
        0, originFile.getName().lastIndexOf("."));
    String newFileName = originalFileName + "_TMP_" + System.currentTimeMillis();
    File newFile = new File(destDir + File.separator + newFileName + ".xml");

    // create the dir if not yet available + move the unzipped file to this dir
    newFile.getParentFile().mkdirs();
    originFile.renameTo(newFile);

    newFile.setWritable(true, false);
    newFile.setReadable(true, false);

    String fileName = newFile.getName().substring(0, newFile.getName().lastIndexOf("."));
    SharedPreferences prefs = context.getSharedPreferences(fileName, Context.MODE_PRIVATE);

    // open the current shared prefs file (or create, if not yet available)
    final SharedPreferences.Editor edit = context.getSharedPreferences(originalFileName, Context.MODE_PRIVATE).edit();

    // clear all current prefs (if there are any)
    edit.clear();
    
    // add all shared prefs
    for (Map.Entry<String, ?> entry : prefs.getAll().entrySet()) {
      if (entry.getValue() instanceof Boolean) {
        edit.putBoolean(entry.getKey(), (Boolean) entry.getValue());
      } else if (entry.getValue() instanceof Float) {
        edit.putFloat(entry.getKey(), (Float) entry.getValue());
      } else if (entry.getValue() instanceof Integer) {
        edit.putInt(entry.getKey(), (Integer) entry.getValue());
      } else if (entry.getValue() instanceof Long) {
        edit.putLong(entry.getKey(), (Long) entry.getValue());
      } else if (entry.getValue() instanceof String) {
        edit.putString(entry.getKey(), (String) entry.getValue());
      } else if (entry.getValue() instanceof Set) {
        edit.putStringSet(entry.getKey(), (Set<String>) entry.getValue());
      }
    }
    edit.commit();

    // delete the unzipped file (tmp file)
    newFile.delete();
  }
}
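
A quick note on the query at the top of importData: before overwriting the database files, the app’s open database connection has to be closed, and in our simplified listing this is delegated to a content provider URI. Here is a minimal sketch of what the receiving side could look like (the provider, the "close_db" URI segment, and the openHelper field are hypothetical and only illustrate the idea):

@Override
public Cursor query(Uri uri, String[] projection, String selection,
    String[] selectionArgs, String sortOrder) {
  // inside the app's (debug) ContentProvider: react to the "close database" URI
  if ("close_db".equals(uri.getLastPathSegment())) {
    // close the SQLiteOpenHelper so the database files can safely be replaced
    openHelper.close();
    return null;
  }
  // ... regular query handling ...
  return null;
}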

At that point, the only thing missing is the glue between our tests and the data import process. To ease the developer’s life, this is achieved by annotating the test class with a custom annotation that takes the previously created zip file as input. We have an abstract class, extended by all of our test classes, which checks for this annotation and imports the data if needed.

public abstract class BaseAutomatedTest {

  public BaseAutomatedTest() {
    String appDataFile = null;
    RuntasticTest annotationRt = this.getClass().getAnnotation(RuntasticTest.class);
    if (annotationRt != null) {
      appDataFile = annotationRt.appData();
    }

    if (appDataFile != null && !appDataFile.isEmpty()) {
      setAppData(appDataFile);
    }
  }

  public void setAppData(String assetFile) {
    try {
      DataImporter.importData(InstrumentationRegistry.getTargetContext(), /* <your_zip_file> */);
    } catch (Exception e) {
      e.printStackTrace();
    }
  }

  @After
  public void cleanAppData() {
    // Optionally, you could clean the database and app data here
  }
}

@Inherited
@Retention(value = RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE})
public @interface RuntasticTest {
  String appData() default "";
}
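
One detail the simplified snippets leave open is how the zip file in the assets folder becomes a File that DataImporter can read. Here is a minimal sketch of how this could be done, assuming the zip is bundled in the instrumentation test APK’s assets and copied to the target app’s cache directory first (copyAssetToFile is an illustrative helper name, not part of the implementation above):

private static File copyAssetToFile(Context targetContext, String assetFile) throws IOException {
  // read the zip from the test APK's assets and write it to the target app's cache directory
  File outFile = new File(targetContext.getCacheDir(), assetFile);
  try (InputStream in = InstrumentationRegistry.getContext().getAssets().open(assetFile);
       OutputStream out = new FileOutputStream(outFile)) {
    byte[] buffer = new byte[1024 * 4];
    int read;
    while ((read = in.read(buffer)) != -1) {
      out.write(buffer, 0, read);
    }
  }
  return outFile;
}

Inside setAppData, the returned File could then be passed to DataImporter.importData() together with the target context.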

In the end, the developer’s only job is to annotate the test class as shown below and voilà!

@RuntasticTest(appData = "premium_user_week1.zip")
public class TrainingPlanWorkoutUiTest extends BaseAutomatedTest {
…
}

When run, the above test imports the previously exported shared preferences and databases. The test is thus more robust, as the app state is known and tailored to the test scenario.
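
To illustrate what this enables, here is a minimal sketch of what such a test could look like with Espresso (the activity class, view IDs, and strings below are hypothetical and only show the idea):

@RuntasticTest(appData = "premium_user_week1.zip")
public class TrainingPlanWorkoutUiTest extends BaseAutomatedTest {

  @Rule
  public ActivityTestRule<MainActivity> activityRule =
      new ActivityTestRule<>(MainActivity.class);

  @Test
  public void finishingLastWorkoutOfWeekOneUnlocksWeekTwo() {
    // no login and no health warning handling needed: the imported snapshot already
    // contains the access token and the acknowledged-warning flag
    onView(withId(R.id.start_workout_button)).perform(click());
    // ... go through the workout screens ...
    onView(withId(R.id.training_plan_week_label)).check(matches(withText("Week 2")));
  }
}

Because the snapshot pins the user to the end of the first week, the same test can be re-run any number of times without having to roll back the training plan progress.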

Conclusion

Based on this mechanism, we can easily develop and run tests with great confidence. As we no longer have to care about the app state a test runs on, we save precious time when developing features, and our tests have definitely become more robust.

Have you ever encountered these kinds of testing issues? If yes, how do you tackle them? Do you have a different approach or have you ever developed similar tools? As sharing is caring, we would love to hear about them and get your feedback regarding our approach. Feel free to drop us a line in the comment box below and share this article to enjoy healthier and stronger test runs!
