Test automation conventions
Last modified on Mon 02 Feb 2026

Frameworks

Keep your framework lightweight. It should be easy to navigate, easy to understand, and easy to maintain.

In practice, a lightweight framework usually means:

Signals that the framework is becoming too complex:

A short README or onboarding document that explains the main folders and how to run a subset of tests is a must.
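For example, assuming pytest is the test runner, tagging tests with markers makes "run a subset" a one-line command that the README can document (the "smoke" marker and test below are only illustrative):

import pytest


# Tagging tests with a marker lets the README document a simple command
# for running only part of the suite, e.g.: pytest -m smoke
@pytest.mark.smoke
def test_app_starts():
    assert True  # placeholder; a real test would verify actual behavior

Markers usually also need to be registered (for example in pytest.ini) so that pytest does not warn about unknown marks.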

Project structure

When working on UI automation, a design pattern such as the Page Object Model (POM) should be used to structure the project. Similar ideas (separating concerns, centralizing access to external systems) apply to other types of automated tests as well.
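For illustration, a POM-based project could be laid out roughly like this (folder and file names are only examples):

tests/
    test_login.py
pages/
    login_page.py
    login_actions.py
    common_dialogs.py
conftest.py
README.md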

With POM implemented:

Page objects

Page objects should only contain locators for that specific page/screen.

Read the locators article for more details on using locators.

Methods that use those locators should live in the same page object as the locators themselves. The structure can be split further by moving the methods into a separate class, e.g. one suffixed with "_actions". That class should contain only methods related to the screen (or feature) it refers to.

The exception is locators and actions that are shared across multiple pages, such as dialog windows. Those can live in their own class.

Avoid adding assertions or complex business rules inside page objects. Page objects should describe how to interact with the UI (click, type, read values), while tests decide what to verify and what the expected behavior is.
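A minimal sketch of this separation, assuming Selenium with pytest and a driver fixture provided elsewhere (class, locator, and test names are illustrative):

from selenium.webdriver.common.by import By


class LoginPage:
    # Only locators for the login screen live here.
    USERNAME_INPUT = (By.ID, "username")
    PASSWORD_INPUT = (By.ID, "password")
    LOGIN_BUTTON = (By.ID, "login")


class LoginActions:
    # Only interactions with the login screen live here; no assertions.
    def __init__(self, driver):
        self.driver = driver

    def log_in(self, username, password):
        self.driver.find_element(*LoginPage.USERNAME_INPUT).send_keys(username)
        self.driver.find_element(*LoginPage.PASSWORD_INPUT).send_keys(password)
        self.driver.find_element(*LoginPage.LOGIN_BUTTON).click()


def test_login_shows_dashboard(driver):
    # The test drives the actions and owns the verification step.
    LoginActions(driver).log_in("user@example.com", "secret")
    assert "Dashboard" in driver.title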

Tests

Tests should be:

NOTE:

Independent tests

Ideally, you want to write independent tests.

Independent tests:

However, you might not be able to follow best practices on every single project.

Web and mobile projects work differently. While on a web page you can often jump directly to a specific URL, in a mobile app you usually have to follow a certain flow before reaching the desired screen.

Something worth considering is the time it takes for tests to run and the cost of test automation. The more tests you add, the longer the suite takes to run. If your tests need real hardware and you are limited by the number of devices available, the suite will not scale easily. Additionally, preparing the initial state for a test can take too much time: if each test needs 10 minutes of setup before it even reaches the part you want to check, you will not be able to run many tests in a reasonable amount of time.

To keep tests as independent as possible, especially on mobile, consider:

Sometimes it still makes sense to have a “flow” test suite that intentionally follows a long user journey end to end (e.g., onboarding or checkout). In those cases, treat that suite as a smaller group of slower tests and keep the majority of your tests independent.
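For instance, a pytest fixture can prepare state through an API instead of the UI, so a test can start closer to the screen it actually verifies (the endpoint, payload, and names below are hypothetical):

import pytest
import requests


@pytest.fixture
def registered_user():
    # Create the precondition through the backend instead of clicking
    # through the signup flow in the UI. The endpoint and payload are
    # examples; use whatever your project exposes for test setup.
    response = requests.post(
        "https://api.example.com/test-users",
        json={"name": "qa-user"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()


def test_profile_screen(registered_user):
    # The UI test can now deep link or navigate straight to the profile
    # screen for this user instead of repeating the whole signup flow.
    assert registered_user["name"] == "qa-user"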

What to automate?

Be careful not to fall into the trap of adding tests just to report a growing number of them. Having a lot of tests does not mean we are doing it right.

After a few months or years on a project, you might end up with hundreds of tests, if not more. If you keep adding tests without paying attention to their stability, they will become harder to maintain. A few mistakes along the way can pile up until they become impossible to fix. Some tests might be too difficult to automate and bring no value. Others might be flaky and require more time to maintain than it took to write them.

Questions worth asking before/during test automation:

A short checklist to evaluate whether a scenario is a good candidate for automation:

As a rule of thumb:

When starting with test automation, the focus should be on:

Afterwards, depending on the project, or once all the existing features are covered, you can continue by covering new functionality.

Flaky tests

At some point, you will write a test that works fine for a while but then starts misbehaving: sometimes it passes, sometimes it fails.

Before you delete a flaky test, try the following:

If you end up updating and tweaking the test every once in a while and simply cannot get it to work properly, consider removing it. Otherwise, it will only cause headaches and take up time you could spend better elsewhere.

Maybe it should simply be tested manually.

Asserts

In test automation, an assertion is the validation step that determines whether the automated test case passed or failed. Every test should have at least one assert through which we confirm that the actual result matches the expected one.

Don't reinvent the wheel when it comes to assertions. There are a bunch of assertion libraries that can be used out of the box. Don't write your own verification methods unless it's really necessary.
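As a small illustration, a hand-rolled verification method usually just duplicates what the framework already provides; with pytest, a plain assert already reports both values on failure (the test and variable names are made up):

# A hand-rolled verification method adds code to maintain and usually
# produces a weaker failure message:
def verify_equal(actual, expected):
    if actual != expected:
        raise AssertionError(f"Expected {expected!r} but got {actual!r}")


# pytest's plain assert already shows both values when it fails,
# so the custom helper is rarely worth it:
def test_username_is_displayed():
    displayed_name = "username"  # stand-in for a value read from the UI
    assert displayed_name == "username"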

Some general conventions that help keep asserts useful:

When an assertion fails, test execution is usually aborted. However, sometimes you do not want to abort the test but let it finish. It is therefore important to know the difference between the types of assertions.

Hard assert

Hard asserts are asserts that stop test execution on an assertion error. If you put an assertion in the middle of the test and it fails, that is where the test stops.

This type of assert should be used when you do not want the test to continue because the condition for further steps has not been met. For example, you need a user to be created before continuing to the next screen/page. If the user is not created, there is no point in continuing, and the test should be marked as failed.
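A minimal pytest sketch of a hard assert used as a gate (the helper and test names are illustrative):

def create_user(name):
    # Stand-in for the real user creation step.
    return {"id": 1, "name": name}


def test_user_flow():
    user = create_user("qa-user")

    # Hard assert: if the user was not created, the remaining steps are
    # pointless, so the test stops and is marked as failed right here.
    assert user["id"] is not None

    # Steps below only run if the assert above passed.
    ...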

Soft assert

Soft asserts refer to asserts that do not stop the test execution in case of an unexpected result. This type of assert is also useful because you can have multiple asserts throughout the test. The test execution will not stop if any of the asserts fail. When the test comes to an end, you will get the result on all failed asserts.

For example, you have a screen with a list of values that you want to check, but those values are not a precondition to any of the following steps. If any of the values are incorrect, it will not affect the following steps. You can simply add as many soft asserts as there are values on the screen and check that all of them match the expected result.

One difference compared to hard asserts is that, in some libraries, you have to collect all soft asserts at the end of the test. Otherwise, the asserts are not evaluated.

Example using pytest-check (Python):

import pytest_check as check


def test_example_one():
    # Soft check: "a" is a substring of "car"
    check.is_in("a", "car")

    # Soft check: the two values are equal
    check.equal("username", "username")

Example using softest (Python):

import softest


class ExampleTest(softest.TestCase):
    def test_example_two(self):
        # First soft assert: "a" is a substring of "car"
        self.soft_assert(self.assertIn, "a", "car")

        # Second soft assert: the two values are equal
        self.soft_assert(self.assertEqual, "username", "username")

        # Collect all soft asserts; the test fails here if any of them failed
        self.assert_all()