Drawing the landscape
Before focusing on any functional test automation techniques and tools, it's always a good idea to look at the bigger picture first. Here are some points worth highlighting.
Follow the right patterns
It is commonly known that one should follow the test pyramid concept (as opposed to the testing ice cream cone) when designing automated tests and continuous integration pipelines.
The reasons behind this are simple: tests from the lower levels of the pyramid execute faster and are usually cheaper to implement. Also, the earlier we find bugs, the better. This doesn't mean that automating higher-level tests (validating all of the system's components) is any less important.
Keep it balanced
Deciding what tests should be automated, and to what extent, is always a crucial part of the testing strategy. Following the test pyramid is a good starting point, but it's also important not to underestimate the value of other test techniques, both functional (e.g. exploratory) and non-functional (code analysis, performance, resilience, security, …). When automating functional tests, as a rule of thumb, user interface tests should be kept to a bare minimum (covering critical business paths) and, whenever possible, testing via APIs should be preferred.
Choosing your tools
The tooling choice for unit and integration tests is strongly related to the technology in which the component is being developed. Tools such as JUnit, Mockito, Jest, etc. help developers achieve test coverage at the early build stage.
It is at the higher levels of the test pyramid that we can start thinking about common testing frameworks. Consumer-Driven Contracts, supported by tools such as Spring Cloud Contract and Pact, are gaining popularity as a means of validating API contracts early.
But what about validating logic critical to our business in an integrated environment, where multiple components and services are working together? How can we organize our test scenarios and make them executable with a reasonable effort?
This is where an End-To-End Testing Framework might come in handy!
There are plenty of commercial frameworks out there, but we've decided to embrace open source and use well-established solutions backed by large communities instead.
The reasons behind our choices are:
- We like it when things have matured enough to just work
- We like to benefit from the vast integration and customization possibilities of open-source frameworks
- We have successfully addressed both internal and commercial needs for acceptance testing using the toolset described in detail below.
End-To-End Testing Framework
Our framework aims to support automated acceptance tests and is designed to accommodate both API and GUI testing.
It's implemented in Java and, if needed, can easily be extended to connect to databases and message queues.
Scenarios (created together with business owners / analysts) are written in Cucumber (Gherkin syntax) and are grouped into features.
Tests are executed with TestNG + CucumberJVM and Maven, based on scenario tags (specifying severity, components, etc.). Tests can be run in parallel; test data can be set up using TestNG or Cucumber hooks.
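For illustration, a tag-filtered Maven run might look like the command below (assuming Cucumber 5+ property names; older versions used `-Dcucumber.options="--tags @api"` instead):

```shell
# run only API scenarios tagged as blockers
mvn clean verify -Dcucumber.filter.tags="@api and @blocker"
```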
What’s under the hood?
- Cucumber (Gherkin) -> scenarios and features
- RestAssured -> REST API step definitions
- JAX-WS -> SOAP API step definitions
- Selenium -> GUI step definitions
- Zalenium -> running GUI scenarios on different browsers, video recording dashboard
- Allure -> common-format HTML test reports
- Support Libraries:
  - Swagger Codegen -> generating class models from Swagger definitions
  - Apache CXF -> generating class models from WSDL
  - Java Faker -> fake data generator
  - The Waiter -> Selenium waits wrapper
- TestNG + CucumberJVM, Maven -> running the tests, before/after hooks
- Jenkins (or similar) -> CI/CD
Benefits and Features
Ready to deploy
The "boilerplate" is ready to be deployed as a Docker container, on premises or in the cloud, for example as an OpenShift or K8s project (our preferred out-of-the-box approach for the most efficient execution environment).
Executable test specifications
Gherkin provides a syntax for human-friendly test specifications that can be implemented and run as code. Features, Scenarios and Steps make up the business-facing test specification, while Step Definitions together with support code carry out the testing tasks.
Here are two examples of executable test scenarios.
- API (basic Scenario):
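(A minimal illustrative sketch; the endpoint, tags and field names here are hypothetical, not taken from a real project.)

```gherkin
Feature: Customer API

  @api @blocker
  Scenario: Retrieve an existing customer
    Given a customer "John Doe" exists in the system
    When I send a GET request to "/customers" for "John Doe"
    Then the response status code is 200
    And the response body contains the customer name "John Doe"
```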
- GUI (using a Scenario Outline):
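(Again an illustrative sketch with hypothetical page and user names, showing how a Scenario Outline is driven by its Examples table.)

```gherkin
Feature: Login page

  @gui
  Scenario Outline: Logging in with different credentials
    Given the login page is open
    When I log in as "<user>" with password "<password>"
    Then I should see the "<result>" message

    Examples:
      | user  | password | result          |
      | alice | secret1  | Welcome, alice! |
      | bob   | wrong    | Invalid login   |
```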
Please note that driving the scenario with data is as easy as adding new rows to the "Examples" table!
Allure delivers good-looking HTML reports and out-of-the-box integrations for RestAssured, Cucumber and Jenkins.
We’ve further customized our reports to provide:
- References to video recordings of GUI test executions (the Zalenium dashboard)
- Links to Ticket Management System (e.g. JIRA)
When needed, execution results can also be reported back to JIRA (through custom "@After" hooks).
Here are some best practices to follow when implementing and executing the scenarios:
- Scenarios should be prepared together with business owners / analysts and written in Cucumber (Gherkin), taking advantage of features such as:
- Scenario Outlines and Examples
- Data tables
- Tags (e.g. “@gui”, “@api”, “@blocker”, etc.)
- The scenarios must be independent of each other, i.e. each should be runnable on its own (test data should not "leak" into other scenarios)
- The scenarios should not contain unnecessary technical details and should be easy to understand
- Scenarios should be grouped in *.feature files according to the tested business domain (not necessarily per user stories)
- When implementing GUI step definitions:
- Page Object Pattern is recommended
- Fixed "sleeps" are forbidden; use explicit waits instead
- IDs and CSS selectors are preferred over XPath
- When implementing API step definitions:
- Common properties should be reused with RestAssured RequestSpecifications
- Filters should be used for logging
- Whenever possible, DTOs should be generated from API definitions (WSDL, Swagger/OpenAPI)
- Tests should be run on dedicated environments / with dedicated test data (to avoid interference with manual tests)
- Automatically generated data should be prefixed or suffixed (so that it can be separated from manual test data)
- Systems outside integration scope (which we don’t control in our environment) should be mocked
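The "no sleeps" rule above can be sketched as a generic polling wait in plain Java. This is a simplified, hypothetical stand-in for what Selenium's WebDriverWait or The Waiter do, not their actual API: instead of pausing for a blind, fixed duration, the test re-checks a condition at short intervals until it holds or a deadline passes.

```java
import java.util.function.BooleanSupplier;

// Simplified polling wait (illustrative only; in real GUI step
// definitions use WebDriverWait / The Waiter instead).
public class PollingWait {

    public static boolean waitUntil(BooleanSupplier condition,
                                    long timeoutMillis,
                                    long pollIntervalMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true; // condition met, the step can proceed immediately
            }
            Thread.sleep(pollIntervalMillis); // short poll, not a blind fixed sleep
        }
        return condition.getAsBoolean(); // one last check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Condition becomes true roughly 200 ms after start; timeout is 2 s.
        boolean ok = waitUntil(() -> System.currentTimeMillis() - start > 200, 2000, 50);
        System.out.println(ok ? "condition met" : "timed out");
    }
}
```

The key difference from a fixed sleep is that the wait returns as soon as the condition holds, so fast environments are not penalized and slow ones get the full timeout.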
Why not use Postman and Cypress.io instead?
Postman (together with Newman) and Cypress.io are both great (semi-commercial) tools, though somewhat limited. Working in large enterprise environments, the requirements for automated tests are sometimes simply more demanding. Taking API tests as an example: we actually do use Postman as a development (and test development) aid, but RestAssured gives you more control over both organizing your tests and executing them.
A good example of this is a test scenario we recently had to cover, which required taking control over the SSL session. While Postman keeps the SSL session open between requests (in a non-configurable way), RestAssured lets you choose whether to keep the SSL connection alive or close it after subsequent requests (via its connection-config methods).
Cypress.io’s popularity is growing, but when full browser control (working with tabs and iframes) and cross browser testing are required – Selenium, despite its disadvantages, still has more to offer.
(Please note that the framework components can be modified and adjusted depending on project requirements.)
Keeping it together
Here are some more general tips on how to get the most out of your automated end-to-end tests.
Integrate with your CI/CD pipelines
Automated acceptance tests should be integrated into CI/CD pipelines, as a next step after automatically deploying your applications into dedicated test environments (created for example using OpenShift project templates). Failed acceptance tests mean failed deployment!
Make your automated tests part of the development process
Automated test scenarios should be written alongside product development. Writing tests in parallel with developing your features (as opposed to treating tests as an afterthought) will shorten the feedback loop and help you deliver a better product.
Work with stakeholders
Inviting stakeholders (business analysts, manual testers) to work on BDD test scenarios will let your teams communicate better and might even help identify bugs and gaps before they make it into code. A ubiquitous domain language will help avoid misunderstandings.
Don’t repeat yourself (and your tests)
Duplicating business logic coverage across different test levels should be avoided. End-to-end tests should only cover the most important user journeys through your applications.
Act on feedback
Bugs identified on any of the testing steps should be covered by tests at the lowest possible level of the test pyramid.
Mind the execution time
It’s a good idea to agree on a time budget regarding the e2e test suite execution, for example 15 minutes. Remember that your automated tests are a “safety net” and should not make the deployment process uncomfortably long.
How can we help?
Given the many challenges you may encounter along the way, our experience shows that it pays to start with a proven, reliable partner who has already built up a knowledge base of possible implementation dead ends. BlueSoft can help whether you want to start from scratch or simply extend your existing framework. With pre-configured implementation patterns, we can start immediately with an out-of-the-box solution while staying open to future adjustments towards a bespoke fit.