Adding new code to an existing codebase is usually followed by one or more tests that hunt down regressions and other kinds of software bugs. Such tests are crucial to the software development life cycle because they catch errors before they reach end users in a production environment.
As shown in the test pyramid below, different kinds of automated tests come with different trade-offs, and it is the job of a good test engineer to find the right balance between them.
In this article, we will focus on some challenges of automated UI testing and consider scenarios where alternative approaches are a better fit for testing the user interface of a web application.
Why test the UI?
Contemporary web applications provide a user interface as the main layer of interaction between the user and the application. The UI usually complies with design standards and follows widely accepted design conventions so that users can navigate easily and execute complex actions.
To achieve a good level of coverage, the UI is subjected to rigorous testing. A common problem emerges when test engineers transition from manual testing into automation but carry some manual testing habits along with them.
Manual UI testing relies on human eyes and hands to assert the state and output of one or more UI components.
The human way of interacting with the UI is incredibly slow and complex, and it looks nothing like what happens when an automated test, which can run 100x faster, carries out the same assertion.
Some challenges of automated UI testing
The test pyramid we discussed earlier suggests combining different levels of automated tests to strike a balance. Instead of sticking solely to automated UI tests, even where they might not deliver the desired results, it can be more efficient to employ lower-layer tests in the right scenarios; they can accurately exercise underlying operations, such as the back-end logic, as in the sketch below.
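A lower-layer test can be as small as a unit test that exercises a single piece of back-end logic directly, with no browser involved. The sketch below is illustrative only: it assumes a hypothetical calculateDiscount function and uses Jest-style assertions.

```typescript
// A minimal sketch of a lower-layer test, assuming a hypothetical
// calculateDiscount function; the names and pricing rules are illustrative.
import { calculateDiscount } from "./pricing";

describe("calculateDiscount", () => {
  it("applies a 10% discount to premium accounts", () => {
    // Exercises the back-end logic directly, with no UI in the loop.
    expect(calculateDiscount({ total: 100, premium: true })).toBe(90);
  });

  it("leaves standard accounts unchanged", () => {
    expect(calculateDiscount({ total: 100, premium: false })).toBe(100);
  });
});
```

A test like this runs in milliseconds and fails for exactly one reason, which is what makes it a useful complement to the slower, broader UI tests higher up the pyramid.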
An interface that makes API calls to a second application and depends on some response values
An automated UI test, in its simplest form, is a simulation of one or more user actions. We want to write tests that reflect the actions a user is capable of executing without direct access to the methods in the codebase.
User interfaces are more complex than ever before, and a user may have to interact with another application or browser window to complete an action. In such a case, the success of a corresponding automated test goes beyond asserting the presence of a UI element. Properly written API tests, such as the sketch below, would be a better solution than a non-deterministic UI test that fails randomly.
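An API-level test can assert on the response the UI depends on, instead of waiting for the element that eventually renders it. The sketch below uses Jest-style assertions and the Fetch API; the endpoint, query parameters, and field names are assumptions made for illustration, not part of any real application.

```typescript
// A minimal sketch of an API-level test; the URL and response shape are assumed.
describe("GET /api/exchange-rates", () => {
  it("returns the rates the UI depends on", async () => {
    const response = await fetch("https://example.com/api/exchange-rates?base=USD");

    // Assert on the contract directly rather than on a UI element
    // that happens to render these values.
    expect(response.status).toBe(200);

    const body = await response.json();
    expect(body.base).toBe("USD");
    expect(typeof body.rates.EUR).toBe("number");
  });
});
```

If the second application changes its response shape, this test fails with a precise message about the contract, instead of a UI test timing out on an element that never appeared.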
An interface that supports the execution of long chains of gesture actions
On smaller screens, web applications may be optimized with complex UI workflows that allow different kinds of interaction gestures such as clicking, swiping, spreading, scrolling, and pinching.
An automated UI test may become unpredictable when it is built on long chains of UI actions that directly depend on multiple gestures. The unpredictability can be the result of one action executing improperly and passing an erroneous value on to the rest of the actions in the chain.
Such a flaky UI test can be replaced by one or more unit tests that wait for each action to complete, verify the returned data, and then proceed to the next test, as sketched below.
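Here is one way that might look, assuming a hypothetical Carousel model whose methods are what the swipe and pinch handlers ultimately call; the class, its methods, and its behavior are illustrative only.

```typescript
// A minimal sketch: each gesture's effect is verified in isolation
// against a hypothetical Carousel model, instead of replaying real gestures.
import { Carousel } from "./carousel";

describe("Carousel", () => {
  it("advances to the next slide on swipe", () => {
    const carousel = new Carousel({ slideCount: 3 });

    const state = carousel.swipeLeft();
    expect(state.activeSlide).toBe(1);
  });

  it("clamps the zoom level when pinching past the maximum", () => {
    const carousel = new Carousel({ slideCount: 3, maxZoom: 2 });

    const state = carousel.pinch(5);
    expect(state.zoom).toBe(2);
  });
});
```

Because each test verifies the data returned by a single action, a failure points at one gesture handler rather than at an unrepeatable chain of simulated touches.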
Handling multiple errors
It is challenging to write automated UI tests that handle errors effectively. The way humans interact with the user interface is complex, and tests must leave room for the complex scenarios that can lead to errors. Some test engineers complement automated tests with manual tests because some errors can only be dealt with manually, while other sections can be automated completely.
Reusing the same data-set for every UI test can lead to unexpected results
Applications with an authentication wall may display different variations of the UI based on certain conditions:
Is the user logged in?
Is this a premium account?
Is the user old enough to see this section?
When an automated test suite is run against a data-set or database, some tests may alter fields of the data-set to complete their assertions. This changes the variation of the UI that is returned. Once many fields of the data-set have been altered, it becomes difficult to predict whether the suite's remaining tests will execute successfully.
A workaround would be to start each test with a clean data-set, but that might affect the execution time of the overall suite. This is another scenario where a different kind of test can be written to represent the underlying logic of the user interface. Rather than relying on the presence of a UI element that might be altered by the state of the data-set, an API test can request the fundamental data and make the same assertions based on the received results, as in the sketch below.
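A minimal sketch of that workaround, assuming Jest-style tests, a hypothetical seedTestDatabase helper, and an invented session endpoint; every name here is an assumption made for illustration.

```typescript
// A minimal sketch: reset to a known data-set, then assert on the
// fundamental data rather than the UI variation it produces.
import { seedTestDatabase } from "./test-helpers"; // hypothetical helper

describe("account API", () => {
  beforeEach(async () => {
    // Earlier tests cannot change this test's outcome: the state is known.
    await seedTestDatabase({ user: { loggedIn: true, premium: true, age: 21 } });
  });

  it("reports the flags that drive the UI variation", async () => {
    const response = await fetch("https://example.com/api/account/session");
    const body = await response.json();

    expect(body.premium).toBe(true);
    expect(body.ageRestrictedContent).toBe(true);
  });
});
```

Seeding only the data a given test needs keeps the reset cheap, while the assertions stay tied to the conditions (logged in, premium, of age) that decide which UI variation is rendered.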
Conclusion
Automated UI testing solves a range of problems, from finding regression errors to preventing newly written code from introducing new software bugs. However, automation engineers may be tempted to write "problematic" tests simply because they do not know what to test.
Automated UI tests are also not always the right tool for testing every UI element. There are often better alternatives that promise more accurate results and do not involve the user interface at all, relying instead on the back-end logic that serves as the skeleton of the UI's workflow.
Teams switching from manual testing should consider the scenarios in this article when deciding how to structure their tests.