Microservices testing strategy

The last 10 years have been a period of rapid growth in the popularity of distributed architectures. The promise behind this architecture is increased enterprise agility: frequent releases of new features allow us to test hypotheses about customers’ needs. One of the key success factors is confidence in each deployment, gained from a set of tests run each time a new software version is about to reach production. In contrast to monolithic systems, where the scope of the tests is defined by the complexity of the whole system, in microservices the scope of a test can be limited to save time without compromising that confidence. This approach to architecture requires reviewing the test structure as we know it. Today, we are going to go through the automated test types that are useful for decent quality assurance of microservice systems.

In this article you will learn

  • What to test in the microservice system
  • Which test types are needed for each part of a microservice
  • How to implement them efficiently
  • How to balance your tests to achieve cost and time efficiency

All code examples in this article are written in Java, but the presented approaches and patterns work for other languages as well. All of the mentioned tools, e.g. Kafka and WireMock, may be used with different languages and frameworks.

Tests types in the microservice system

Identifying the test types needed for testing a microservice is easier when you look at the ingredients of a typical application. Usually it is a rich domain surrounded by integration and coordination code. Each part of the system is then composed of smaller units, each with a single responsibility where possible.

[Diagram: a typical microservice application, a domain core surrounded by integration and coordination code, connected to external systems]

Let’s try to identify the test types by looking at the subject each test is supposed to check.

Unit tests

The smallest pieces of software are tested by unit tests. The goal of this type of test is to check whether each software unit behaves correctly in isolation from the others. Most professional developers write them on a daily basis. The tested unit’s isolation is achieved by mocking all of its dependencies.

The characteristics of unit tests are:

  • speed – single tests execute in under 1 ms on average
  • repeatability – unit tests do not depend on the environment
  • simplicity – tests are invoked on a narrow piece of code, so it is easy to identify the cause of a failure

Integration tests

Looking at the diagram above, you can notice external systems that a microservice application needs to connect to and cooperate with:

  • database, e.g. PostgreSQL
  • message brokers, e.g. Apache Kafka
  • other services accessible via REST or SOAP

Usually, at the code level, developers extract dedicated pieces of code responsible for communication with external systems and provide a convenient interface between the domain layer and the integrated system:

  • for database integration, we can use an ORM and Spring Data
  • for message broker integration, we can use Spring Cloud Streams to create messaging gateways
  • for REST services that do not provide an SDK, we can use the plain Java 11 HTTP Client

The subjects of the integration tests are those classes/functions that provide the communication with the external systems. Unit tests for integration code are hardly beneficial, since they assume all dependencies are mocked. In integration tests, all dependencies should be accessed over real network protocols, even if that means running test instances of the integrated systems.

Compared with unit tests, integration tests take much more time to execute and do not verify the implementation of business logic.

Instead, the purpose of the integration test is to check:

  • whether the connection to the database is established and the requested queries are executed correctly
  • whether messaging gateways broadcast messages containing correct data
  • whether HTTP clients interpret the server’s responses correctly

Component tests

Having all pieces working correctly in isolation and being sure that all external dependencies are accessible does not guarantee that a microservice as a whole behaves as expected. To verify that, complete use cases or processes enclosed in a single component are tested. External dependencies can be substituted with in-process mocks to speed up execution. Each test starts with a business feature request via an exposed interface, and the expected results are verified through public interfaces or by checking interactions with the mocks.

Besides checking whether the microservice itself works properly, component tests also validate:

  • the microservice configuration
  • whether the microservice is correctly communicating with its dependencies/collaborators

End to end tests

The highest type of test in the hierarchy is the E2E test. In this kind of test, the whole microservice system is run, usually with the exact same setup as the production environment. A successful E2E test result means that a user’s journey can be completed. End-to-end tests are the most versatile type of test, yet the price of that versatility is complexity and fragility.

The purpose of the E2E tests is to check:

  • whether the whole microservice system is working properly
  • whether the system configuration is valid
  • whether the microservices are correctly communicating with each other
  • whether the whole business processes can be successfully completed

Tests types distribution

Once we have identified that different types of tests are used to ensure the code is “working” at different levels, it is time to get down to practical considerations.

Experience shows that the difficulty of writing and maintaining component tests is significantly higher in comparison to unit and integration tests.

Experience also shows that the difficulty of writing and maintaining E2E tests is drastically higher in comparison to component tests.

Knowing that, we need an approach to distributing our test coverage across the test types. The general principle could be phrased as “try to test the implemented behavior with the simplest test you can write”. Wherever possible, try to test your functionality at the level of unit tests. Unit tests are cheap, so you can afford to test all possible cases.

As mentioned, component tests are much harder to write and maintain, so you want to keep their number rather low. Instead of testing every possible case, try to limit yourself to the major cases from a business point of view, for instance one “happy path” and one “sad path”.

E2E tests, on the other hand, are an entirely different beast. There are a number of reasons why an E2E test can fail, and they can be extremely tricky to identify. Keep in mind that neither component nor integration tests will catch errors caused by:

  • HTTP timeouts
  • misconfiguration of the system
  • usage of incompatible versions of cooperating services

That means that, for the sake of maintainability, E2E tests should be limited to the crucial business processes and user journeys.

Tested system overview

In the next paragraphs of the article, we will walk through the implementation of the identified test types. To do that, let’s review a high-level snapshot of the tested system.

For the purposes of this article, let’s consider a system called “Piko” that enables users to manage and publish their tourist attractions.

The Piko system consists of three components:

piko-admin -> piko-locations <- piko-maps

The user journey from creating a location to its publication consists of the following steps:

# | Description                                                                  | Component
1 | The user creates the location                                                | piko-locations
2 | The user marks the location as awaiting publication                          | piko-locations
3 | The administrator accepts the location                                       | piko-admin
4 | The published location is sent to the application that serves public traffic | piko-maps

The majority of the communication between the applications is handled by the Kafka message broker.

For user authentication, Piko uses JWT tokens issued by the AWS Cognito service.

You can check out the code with the command:

git clone https://github.com/bluesoftcom/piko.git

Implementing unit tests

You have likely already been writing unit tests. The subject is pretty popular and has been covered in tons of other materials already. Yet, let’s quickly go through a set of practices that can make your unit tests even better.

Let’s consider the following test case:

@Test
void testChangeLocationStatusToAwaitingForPublication() {
    // given:
    final LoggedUser loggedUser = someLoggedUser().build();
    final Location location = someLocation().build();
    when(locationsRepository.find(any())).thenReturn(location);

    // when:
    final DetailedJsonLocation updatedLocation = locationsService.updateLocationStatus(
        location.getId(),
        LocationStatus.AWAITING_PUBLICATION,
        loggedUser
    );

    // then:
    assertThat(updatedLocation.getStatus()).isEqualTo(LocationStatus.AWAITING_PUBLICATION);
    verify(locationsGateway).sendPublicationRequestIssued(location.getId());
}

This single test already contains a number of tricks and approaches that make it clear and maintainable.

Keep your test organized with given-when-then structure

Each test should consist of three parts:

  • given – represents all the assumptions about the state of the world before the test
  • when – specifies the tested behaviour
  • then – describes the desired outcome

This structure can be applied more broadly, not only to unit tests: it is valid for all the automated tests we write. Moreover, it is also a well-suited form for the acceptance criteria of user stories. Try to clearly identify each part of the test, especially given and when. Remember that the ideal when section contains only the tested invocation.

Use fixtures to make your test setup clear and readable

The test setup is usually the most complex and the longest section. As the given section grows and its readability decreases, you should react immediately. A lack of means to easily set up tests quickly drives developers to uncontrolled code duplication.

One of the best approaches to providing a clean and flexible test setup is the “fixtures” pattern. It only requires extracting a class that contains factory methods returning preconfigured builders ready to be used in the tests. The test above uses the someLocation() method to create a Location object.

public class LocationFixtures {

    public static Location.LocationBuilder someLocation() {
        return Location.builder()
            .status(LocationStatus.DRAFT)
            .id("e3c2e50c-7cb3-11ea-bc55-0242ac130003")
            .lat(51)
            .lng(21)
            .createdAt(Instant.parse("2012-12-12T00:00:00Z"))
            .owner("trevornooah")
            .name("Corn Flower Museum");
    }
}

Then, the returned builder can be customized in a way that helps you build any test case you need.
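
For instance, a test that needs a published location can override just the fields it cares about (a minimal sketch; the field values are illustrative):

// Override only the fields relevant to the test case;
// everything else keeps the defaults from the fixture.
final Location publishedLocation = someLocation()
    .status(LocationStatus.PUBLISHED)
    .name("Old Town Lighthouse")
    .build();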

Use descriptive assertions library

Readers who have worked with JUnit 4 may still remember how limited its built-in assertions were. In JUnit 5 they got significantly better, yet the AssertJ library is eons ahead in that matter.

Consider the following tests and the example failure reports:

JUnit

@Test
void testJunit() {
    final List<String> input = List.of("John", "Brad", "Trevor");
    final List<String> expected = List.of("John", "Trevor");
    assertIterableEquals(expected, input);
}

produces

iterable contents differ at index [1], expected: <Trevor> but was: <Brad>

AssertJ

@Test
void testAssertj() {
    final List<String> input = List.of("John", "Brad", "Trevor");
    assertThat(input).containsExactly("John", "Trevor");
}

produces
Expecting:
<["John", "Brad", "Trevor"]>
to contain exactly (and in same order):
<["John", "Trevor"]>
but some elements were not expected:
<["Brad"]>

 

Keep your tests from asserting on different things

This principle crosses the boundary of a single unit test, but it becomes critical as your codebase grows. It is a practical implementation of a desired test suite characteristic: “for one bug, one unit test should fail”.

Consider the following method:

public DetailedJsonLocation updateLocation(String locationId, JsonLocation jsonLocation, LoggedUser user) {
    log.info("Updating location: {} by user: {} with data: {}", locationId, user, jsonLocation);

    final Location location = locationsRepository.find(locationId);
    checkUserIsLocationOwner(location, user);

    final LocationStatus previousStatus = location.getStatus();
    final Location updatedLocation = transactionOperations.execute(status -> {
        final Location foundLocation = locationsRepository.find(locationId);
        // … mapping code
        return foundLocation;
    });

    if (previousStatus == AWAITING_PUBLICATION) {
        locationsGateway.sendPublicationRequestWithdrawn(updatedLocation.getId());
    }
    return toDetailedJsonLocation(updatedLocation);
}

This method is covered by four tests:

Test method                              | Asserted scope
testUpdateLocationAwaitingForPublication | Behavior unique to updating a location that awaits publication
testUpdateDraftLocation                  | Behavior unique to updating a location that is a draft
testUpdateLocation                       | Behavior common to all location updates
testUpdateLocationOwnerByOtherUser       | Behavior unique to an update of the location by somebody other than the owner

With this approach, if there is a bug in the location updating process, I will end up with only one failed test instead of four, which eases the analysis of the root cause.
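
For illustration, the test focused on the awaiting-publication case could look roughly like this (a sketch under two assumptions: a someJsonLocation() fixture exists, and the mocked transactionOperations simply runs Spring's TransactionCallback in place):

@Test
void testUpdateLocationAwaitingForPublication() {
    // given: a location that currently awaits publication
    final LoggedUser loggedUser = someLoggedUser().build();
    final Location location = someLocation()
        .status(LocationStatus.AWAITING_PUBLICATION)
        .owner(loggedUser.getUsername())
        .build();
    when(locationsRepository.find(any())).thenReturn(location);
    // execute the transactional callback in place instead of in a real transaction
    when(transactionOperations.execute(any())).thenAnswer(invocation ->
        invocation.<TransactionCallback<Location>>getArgument(0).doInTransaction(null));

    // when:
    locationsService.updateLocation(location.getId(), someJsonLocation().build(), loggedUser);

    // then: assert only the behavior unique to this case – the pending
    // publication request is withdrawn; the common behavior is asserted
    // in testUpdateLocation
    verify(locationsGateway).sendPublicationRequestWithdrawn(location.getId());
}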

Implementing integration tests

As mentioned in the previous section, integration tests are meant to test the behavior of the classes used as adapters to external systems. Make sure you have adequate test coverage for all your integrations. In this section, we will review approaches to testing the most common integrations implemented in microservices.

Testing database integration

The database is by far the easiest integration to test, especially when you use Spring Data, which gives you generated repository implementations out of the box. Thanks to the recent growth in Docker’s popularity, developers can use real databases for testing instead of in-memory databases like H2. That approach is implemented in Piko, and you can find the database configuration in docker-compose.yml and application.yml.

When your application invokes complicated queries, or particularly dynamic ones, you should write integration tests for that repository. For instance, piko-locations invokes the query LocationsRepository#findByOwner, which takes a parameter and a sort order.

I can test that query (and the sorting as well) quite easily by reusing the fixtures I previously created for the unit tests.

@Test
void testFindByOwner() {
    // given:
    final String owner = "owner:" + UUID.randomUUID().toString();
    final Location location1 = locationsRepository.save(
        someLocation()
            .id(UUID.randomUUID().toString())
            .createdAt(Instant.parse("2020-03-03T12:00:00Z"))
            .owner(owner)
            .build()
    );
    final Location location2 = locationsRepository.save(
        someLocation()
            .id(UUID.randomUUID().toString())
            .createdAt(Instant.parse("2020-03-03T13:00:00Z"))
            .owner(owner)
            .build()
    );
    final Location location3 = locationsRepository.save(
        someLocation()
            .id(UUID.randomUUID().toString())
            .owner("other-user")
            .build()
    );

    // when:
    final List<Location> locations = locationsRepository.findByOwner(owner, Sort.by("createdAt"));

    // then:
    assertThat(locations).containsExactly(location1, location2);
}

Useful remark: if your test is @Transactional, you should call EntityManager.flush() before and after the tested invocation. Hibernate does not guarantee that database instructions are sent to the database immediately; it may defer them until the transaction flush.

That behavior sometimes leads to false-positive results, where tests pass only because Hibernate did not execute the database statements but cached them in memory.
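
One way to apply that remark in practice could look like this (a minimal sketch, assuming an injected EntityManager; flush() forces the pending statements to the database, and clear() detaches cached entities so the query really hits the database):

@Autowired
private EntityManager entityManager;

@Test
@Transactional
void testFindByOwnerTransactional() {
    // given:
    final Location location = locationsRepository.save(someLocation().build());
    // force Hibernate to send the pending INSERT statements to the database
    entityManager.flush();
    // detach cached entities so the query below is served by the database
    entityManager.clear();

    // when:
    final List<Location> locations =
        locationsRepository.findByOwner(location.getOwner(), Sort.by("createdAt"));

    // then:
    assertThat(locations).extracting(Location::getId).containsExactly(location.getId());
}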

Testing HTTP clients & mocking servers

Communication over the network with the HTTP protocol is a huge part of microservices integration. It is no surprise that for thorough testing, developers need robust tools to mock external services. The most popular tool for that purpose is the WireMock library, which makes stubbing servers really easy.

While the range of WireMock’s use cases is impressive, in Piko we limit ourselves to stubbing one AWS Cognito endpoint for fetching user details.

As for the implementation, it is very convenient to prepare a dedicated class for managing the WireMock server and the stubs. In the piko-admin application, you can find the CognitoApi class written for this purpose.

@Component
@Slf4j
public class CognitoApi implements Closeable {

    private final WireMockServer server;
    private final CognitoProperties cognitoProperties;

    @SneakyThrows
    public void mockSuccessfulAdminGetUser(String username) {
        final String json = "{…}";
        server.stubFor(
            post(urlPathEqualTo(cognitoProperties.getEndpoint().getPath()))
                .andMatching(request ->
                    request.header("X-Amz-Target").firstValue()
                        .equals("AWSCognitoIdentityProviderService.AdminGetUser")
                            ? MatchResult.exactMatch()
                            : MatchResult.noMatch()
                )
                .willReturn(
                    aResponse()
                        .withStatus(200)
                        .withBody(json)
                )
        );
    }
}

Stubs prepared in this manner can be used in an integration test with just one line of code:

@Test
void testSendLocationPublishedNotification() throws Exception {
   // given:
   final String username = "johndoe-" + UUID.randomUUID().toString();
   final String recipient = String.format("<%s@email.test>", username);
   cognitoApi.mockSuccessfulAdminGetUser(username);
   // ...
}

Testing integration with SMTP server

Transactional e-mail sending is one of the common business requirements in applications. While developers have been using local SMTP servers like Papercut for a pretty long time, it is still not that common to use a similar concept in automated tests.

On Docker Hub there are a number of local SMTP servers that expose an API for programmatic use. One of those is schickling/mailcatcher. After sending an e-mail in the test, all you need to do is call the MailCatcher API and make your assertions.

@Test
void testSendLocationPublishedNotification() throws Exception {
    // given:
    final String username = "johndoe-" + UUID.randomUUID().toString();
    final String recipient = String.format("<%s@email.test>", username);
    cognitoApi.mockSuccessfulAdminGetUser(username);

    final DetailedJsonLocation location = …;

    // when:
    adminLocationsNotifications.sendLocationPublishedEmail(location);

    // then:
    final List<MessageSummary> messages = mailCatcherClient.listMessages();
    final MessageSummary foundMessage = findByRecipient(messages, recipient);
    assertThat(foundMessage).isNotNull();

    final MailcatcherMessage message = mailCatcherClient.fetchMessage(foundMessage.getId());
    assertThat(message.getRecipients()).contains(recipient);
    assertThat(message.getSender()).isEqualTo(String.format("<%s>", notificationsProperties.getFromAddress()));
    assertThat(message.getSubject()).isEqualTo("Your location Corn Flower Museum was published");
    assertThat(message.getSource())
        .contains(username)
        .contains(location.getLocation().getName());
}

After fetching the captured messages with the MailCatcherClient, I can check whether the e-mail was sent to and from the correct addresses, as well as whether the e-mail content contains the needed information.
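
Such a MailCatcherClient can be a very thin wrapper over MailCatcher’s HTTP API. Below is a hypothetical, minimal sketch, assuming Spring’s RestTemplate and MailCatcher’s standard /messages and /messages/{id}.json endpoints; the MessageSummary and MailcatcherMessage DTOs are assumed to map the JSON fields used in the assertions:

// A hypothetical, minimal MailCatcher client for the tests.
public class MailCatcherClient {

    private final RestTemplate restTemplate = new RestTemplate();
    private final String baseUrl; // e.g. "http://localhost:1080"

    public MailCatcherClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    public List<MessageSummary> listMessages() {
        // GET /messages returns a JSON array of message summaries
        final MessageSummary[] messages =
            restTemplate.getForObject(baseUrl + "/messages", MessageSummary[].class);
        return messages != null ? List.of(messages) : List.of();
    }

    public MailcatcherMessage fetchMessage(int id) {
        // GET /messages/{id}.json returns the full message details
        return restTemplate.getForObject(baseUrl + "/messages/" + id + ".json", MailcatcherMessage.class);
    }
}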

Testing message broker integration

The second common way of integrating microservices is through a message broker. In the case of Piko, asynchronous messages are sent to and received from Apache Kafka with the use of Spring Cloud Streams. Besides simplifying the implementation of message producers and consumers, it also exposes a convenient API for testing the messaging.

In the unit test section, you may have noticed the assertions on calls to LocationsGateway, which is responsible for sending the messages to the Kafka topic.

Let’s consider a test that verifies that LocationsGateway sends correct messages:

@BeforeEach
void setUp() {
    locationEventsMessageCollector.dumpMessages();
}

@Test
void testSendPublicationRequestWithdrawn() {
    // given:
    final Location location = locationsRepository.save(
        someLocation()
            .id(UUID.randomUUID().toString())
            .build()
    );

    // when:
    locationsGateway.sendPublicationRequestWithdrawn(location.getId());

    // then:
    final LocationMessage message = locationEventsMessageCollector.captureLastLocationMessage();
    assertThat(message).isEqualTo(
        LocationMessage.builder()
            .id(location.getId())
            .status(location.getStatus())
            .event(LocationEventType.PUBLICATION_REQUEST_WITHDRAWN)
            .lat(location.getLat())
            .lng(location.getLng())
            .createdAt(location.getCreatedAt())
            .owner(location.getOwner())
            .name(location.getName())
            .build()
    );
}

The test uses LocationEventsMessageCollector, a very thin class that makes working with the messages more convenient. It uses Spring Cloud Streams to capture the sent message so that assertions can be made on it:

@SneakyThrows // ObjectMapper#readValue throws a checked exception
public LocationMessage captureLastLocationMessage() {
    final Message<String> lastMessage = (Message<String>) messageCollector
        .forChannel(locationEventsBindings.outboundChannel())
        .poll();
    Objects.requireNonNull(lastMessage, "Could not capture last message for: " + locationEventsBindings.OUT + " binding");
    return objectMapper.readValue(lastMessage.getPayload(), LocationMessage.class);
}

Implementing component tests

Component tests are the last type of test you will find in a particular microservice’s code. Their scope is the greatest, and they ensure that a whole business process is correctly conducted inside the application.

When writing component tests, you should pay particular attention to writing accurate assertions. While component tests have a broad scope, you should try to keep your assertions shallow, asserting only that the whole operation succeeded and that the necessary functions were invoked. There is no need to check the behaviour of each function and integration, as that is verified by unit and integration tests.

Testing processes invoked with HTTP

The majority of our business processes are triggered by a call to a REST endpoint, so it is no surprise that the Spring Framework has a convenient tool for testing HTTP endpoints – MockMvc. Let’s look at how it is used in the most important component test in the Piko system – the one verifying the location publication process.

@Test
void testPublishLocation() throws Exception {
    // given:
    final LoggedUser loggedUser = someLoggedUser("user-" + UUID.randomUUID().toString())
        .build();
    final Location location = locationsRepository.save(
        someLocation()
            .id(UUID.randomUUID().toString())
            .owner(loggedUser.getUsername())
            .build()
    );
    cognitoApi.mockSuccessfulAdminGetUser(loggedUser.getUsername());

    // when & then:
    mvc.perform(put("/locations/{locationId}/status", location.getId())
            .contentType(MediaType.APPLICATION_JSON)
            .content("{ \"status\": \"PUBLISHED\" }")
            .with(authenticatedUser(loggedUser)))
        .andDo(print())
        .andExpect(status().isOk())
        .andExpect(jsonPath("$.status").value(LocationStatus.PUBLISHED.name()));

    // and:
    final Location foundLocation = locationsRepository.tryFind(location.getId());
    assertThat(foundLocation).isNull();

    final LocationMessage locationMessage = locationEventsMessageCollector.captureLastLocationMessage();
    assertThat(locationMessage.getId()).isEqualTo(location.getId());
    assertThat(locationMessage.getEvent()).isEqualTo(LocationEventType.LOCATION_PUBLISHED);

    final String expectedRecipient = String.format("<%s>", loggedUser.getEmail());
    final List<MailCatcherClient.MessageSummary> capturedEmails = mailCatcherClient.listMessages();
    assertThat(capturedEmails).anySatisfy(capturedEmail ->
        assertThat(capturedEmail.getRecipients()).contains(expectedRecipient)
    );
}

Even though the ‘then’ part of the test is pretty long, you can still notice the mentioned “shallow assertions” principle. The component behaviour is tested by observing the business process results. The expected results are:

  1. REST API responds with operation success
  2. Location was deleted from the piko-admin database
  3. The application has broadcasted LOCATION_PUBLISHED event
  4. There was an email sent to the user

This approach makes the test immune to the small and frequent changes that we do not need to check here. All of those are covered by different tests:

Change example                         | Verification point
Change in the endpoint response format | Unit test and another component test
Change in the Kafka message format     | Integration tests
Change in the email logic              | Integration test

Testing processes invoked with message broker

It may not be particularly intuitive, but the listeners that react to incoming messages usually invoke a business process, which makes them a perfect subject for component testing. As with HTTP endpoint testing, we want to test the whole interface layer, the business logic and the integrations.

Similarly to MockMvc for HTTP endpoints, Spring Cloud Streams exposes a convenient API to put a message into a channel and have it routed to our application. Let’s look at the test that checks this stage of the location publication process in piko-locations.

@Test
void testLocationPublishedEvent() throws Exception {
    // given:
    final Location location = locationsRepository.save(
        someLocation()
            .id(UUID.randomUUID().toString())
            .status(LocationStatus.AWAITING_PUBLICATION)
            .lastPublishedAt(null)
            .build()
    );
    final LocationMessage message = toLocationMessage(location, LocationEventType.LOCATION_PUBLISHED);

    // when:
    locationEventsBindings.inboundChannel().send(
        new GenericMessage<>(
            objectMapper.writeValueAsString(message)
        )
    );

    // then:
    final Location publishedLocation = locationsRepository.find(location.getId());
    assertThat(publishedLocation.getStatus()).isEqualTo(LocationStatus.PUBLISHED);
    assertThat(publishedLocation.getLastPublishedAt()).isNotNull();
}

That test case is simpler, but you can spot the shallow assertions approach here as well. Putting a message into the channel with inboundChannel().send(…) is fully synchronous, so our test environment stays safe and repeatable.

Organizing tests execution in Maven

Apache Maven uses two lifecycle phases that are meant for test execution: test and verify. The difference between them lies in the plugins that are executed in those phases by default.

In the test phase, the maven-surefire-plugin is executed, which runs the test classes with the Test suffix.

In the verify phase, the maven-failsafe-plugin is executed, which runs the test classes with the IT suffix.

Technical differences between them are covered in the Maven FAQ.

For organizing the project build, it is practical to distinguish tests based on the “IoC container requirement” criterion, rather than designing a separate execution for each test type, as the latter only makes your build longer.

The easiest and most convenient setup is to suffix your unit tests with Test and your integration and component tests with IT. The default configurations will then make maven-surefire-plugin pick up the unit tests and maven-failsafe-plugin pick up the integration and component tests.
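
For illustration, a minimal pom.xml fragment for that setup might look as follows (a sketch; plugin versions are omitted, and the failsafe goals shown are its standard binding):

<build>
  <plugins>
    <!-- `test` phase: runs *Test classes (unit tests) -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
    </plugin>
    <!-- `verify` phase: runs *IT classes (integration and component tests) -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-failsafe-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>integration-test</goal>
            <goal>verify</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>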

Summary

In this article, we have gone through the test types useful for gaining decent assurance that the developed applications are working. For each test type, we have reviewed useful techniques, patterns, principles and tools to implement it.

At the level of component tests, you can notice how nicely the measures we have taken play together, allowing the testing logic to be reused across the tests.

Used tool or pattern | Unit tests | Integration tests | Component tests
Fixtures             | yes        | yes               | yes
Server stubs         | no         | yes               | yes
Message collectors   | no         | yes               | yes

All of the reviewed test types overlap each other a little, giving us a complete code verification process inside a single application.
