Flakiness Tolerance & Retries
Flakiness describes tests that fail sporadically: a flaky test produces different results across executions, often failing due to unpredictable conditions (e.g. infrastructure outages, network errors, race conditions). We have built flakiness tolerance into our product and are continually looking for ways to improve it.
This functionality is split into three parts:
No waits. The Maestro framework does not rely on traditional "sleep" commands. Instead, it intelligently waits for animations to finish and UI elements to appear before starting the next step. This can happen implicitly (e.g. tapOn has an inbuilt timeout) or explicitly with commands such as waitForAnimationToEnd (see the flow sketch after this list). In traditional end-to-end frameworks, wait logic is a major cause of flakiness.
Cloud authoring. A significant cause of flakiness is the difference between local test-creation environments and cloud-based execution environments. Moropo's Test Creator uses the same virtual devices for test creation as it does for test execution, minimising flakiness caused by environmental factors.
Infrastructure retries. Moropo will automatically retry any failure that could be related to infrastructure. This minimises the chance of our system introducing flakiness.
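For illustration, here is a minimal Maestro flow combining an implicit wait (tapOn's inbuilt timeout) with explicit waits (waitForAnimationToEnd and Maestro's extendedWaitUntil). The appId and element labels are hypothetical:

```yaml
# Hypothetical app and element names, for illustration only.
appId: com.example.app
---
- launchApp
# tapOn waits for the element to appear (inbuilt timeout), so no manual sleep is needed.
- tapOn: "Log In"
# Explicitly wait for any running animations to settle before continuing.
- waitForAnimationToEnd:
    timeout: 5000
# Wait for a condition with a custom timeout instead of sleeping.
- extendedWaitUntil:
    visible: "Welcome"
    timeout: 10000
```

Note that the flow contains no hard-coded sleeps: each command handles its own waiting.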
One way to combat flakiness is to retry tests when a failure is observed. Retries can improve the pass rate of flaky tests, but they may also hide useful information about the cause of the flakiness. To give useful feedback about the underlying code base and its potential issues, a testing platform must expose flaky results and provide tools to help analyse them and determine the root cause.
Moropo's mechanism is deliberately simple: we retry a test only when the failure relates to our infrastructure. We retry once; if the test fails again, we publish the result as a FAIL.
We do this to avoid hiding valid issues with a test or a build (i.e. things the user controls) behind a retry mechanism. In the future, we plan to give users additional tools to help identify and combat flakiness.
We retry tests when there is a problem with our infrastructure (summarised in the sketch after these lists). This can include:
Runners
Emulators
OS instability
etc.
We DO NOT retry a test that has failed due to an issue with:
Tests
Builds
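Purely as an illustration, the policy above can be summarised as the following pseudo-configuration. This is not a real Moropo setting, and every key and value name here is hypothetical:

```yaml
# Hypothetical pseudo-config summarising the retry policy; not a real Moropo setting.
retryPolicy:
  maxRetries: 1                # retry once, then publish the result as FAIL
  retryOn:                     # infrastructure-related failures only
    - runner_failure
    - emulator_failure
    - os_instability
  doNotRetryOn:                # user-controlled failures surface immediately
    - test_failure
    - build_failure
```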