<Architect for Developers> series.
Innovation in technology means that flying business class should not be slower than flying economy class – quality is not the price of speed.
Haste is good only for catching fleas.
Part 1 – “As-Is”
Context
For the last few years, I have noticed a trend of degrading User Experience quality on Web sites. For instance, I have to click the same control on some Web pages several times to get the desired effect while other sites work as expected; I click on one image in a gallery but another image pops up; sometimes new sites open that I never selected on the page… You may say there can be many different reasons for this in different cases. Well, yes, such things happened before, but not on such a massive scale.
I decided to conduct a small investigation. First, I checked the companies whose sites I found disturbing – all of them were modern and claimed to use innovative technologies. Then, I assumed that these companies use Microservices and looked up the Quality Assurance (QA) methods for Microservices in the DevOps delivery model. That was nothing more than an intuitive hunch, but the information I found alarmed me.
The majority of sources I found about Microservice testing repeated the same set of test types with no variation. This was unexpected. Below is this recurring list of tests:
- Unit-testing
- Integration/communication Testing between Microservices
- Component Testing – testing of the Microservice as a whole
- Contract Testing
- End-to-end Testing.
Two thoughts about this discovery are most confusing:
- Why are only the mentioned types included while other – very important – ones are not? In particular:
- Load/stress tests
- Negative/failure/robustness tests
- Regression tests for the Microservice-based application
- Scalability tests
- Security and compliance tests.
- If this is a sort of standard, why are there so many different interpretations of what constitutes each type?
I believe that the answer to the first question is rooted in the article “Testing Strategies in a Microservice Architecture”, authored by Martin Fowler, where he articulated those 5 testing types. I think this list has played a dual role: 1) it defines the testing types that should be applied; 2) it acts as a filter that eliminates any other testing type regardless of its value and necessity.
“Golden Standard”
The mentioned Fowler article contains a lot of very useful recommendations. It talks about integration between Microservices: “If one service has another service as a collaborator, some logic is needed to communicate with the external service.” However, this is in serious contradiction with the Microservice characteristics that Mr Fowler articulated in his foundational Microservices article, which almost all developers I know in different companies follow. The latter has led to the popular perception that the more a Microservice is isolated at run-time, the better, and that a Microservice has to minimise (or eliminate) its “dependencies” on other Microservices. This is simple and very convenient for development and testing, but it prevents the creation of Microservice-based applications. If the quoted article has been read by testers, how has it happened that developers obstinately ignore integration and interdependencies between Microservices? In any case, as a result, many correct statements made by Mr Fowler regarding testing ‘hang in the air’.
Before talking further about Microservice QA and testing, let me put down a few comments about Fowler’s “golden standard”, which many have accepted without challenge (or comprehension?).
Overall notes
- None of the specified test types addresses explicit testing for “rainy day” situations, though Integration Testing gives a light hint at it
- Functionality (the core purpose of Microservices) is tested only indirectly, implicitly: “Almost all of the service logic resides in a domain model representing the business domain. Of these objects, services coordinate across multiple domain activities”
- Event-based inter-Microservice communication is not mentioned and, therefore, not tested
- There is no clarity about “external” Microservices. I think that since each Microservice is independent of others and can be created by a different team/developer, all Microservices are external to each other, aren’t they?
- Recommendations mix Microservice design and testing with organisational and ownership aspects. For example, “The testing concerns for external services can be different to those for services under your team’s control since less guarantees can be made about the interface and availability of an external team’s services”. My experience tells me that ownership of a Microservice can be moved to another team at any moment by a simple local reorganisation. This means that basing testing on who controls the test subject at a particular moment is a childish mistake. DevOps teams are still under corporate management and should not imagine that they will exist forever. That is, all Microservices should be tested with no dependency on who owns them at the moment and with no assumption that they could be adjusted later on (except for new requirements).
Unit-testing
The “Golden Standard” describes Unit-testing very accurately and in much detail. I am pleased to highlight a simple statement by Fowler that can be very helpful to DevOps and can cool their rush to delivery. He has said: “Unit testing alone doesn’t provide guarantees about the behaviour of the system”. In other words, Unit-testing is not enough for releasing a Microservice product.
Since practically no Microservice realises a system on its own (it usually provides fine-grained functionality), a Microservice should be tested within the system even though it can be deployed individually. Testing in isolation is only a special technique; it does not substitute for joint testing of several Microservices that are logically linked together.
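To make the scope limitation concrete, here is a minimal, purely illustrative unit test in Python; the `calculate_total` function and its pricing collaborator are hypothetical examples, not taken from Fowler’s article:

```python
import unittest
from unittest.mock import Mock

# Hypothetical business logic inside one Microservice.
def calculate_total(order_lines, tax_service):
    """Sum line prices and add the tax obtained from a collaborator."""
    subtotal = sum(line["price"] * line["qty"] for line in order_lines)
    return subtotal + tax_service.tax_for(subtotal)

class CalculateTotalUnitTest(unittest.TestCase):
    def test_total_includes_tax(self):
        # The collaborator is replaced by a mock: this test says nothing
        # about how the real tax Microservice behaves over the network.
        tax_service = Mock()
        tax_service.tax_for.return_value = 2.0
        total = calculate_total([{"price": 10.0, "qty": 2}], tax_service)
        self.assertEqual(total, 22.0)

if __name__ == "__main__":
    unittest.main()
```

A green result here guarantees only that the internal arithmetic is right; it says nothing about the behaviour of the system the Microservice belongs to.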
Integration/communication Testing
I support Fowler’s statement: “Whilst tests that integrate components or modules can be written at any granularity, in microservice architectures they are typically used to verify interactions between layers of integration code and the external components to which they are integrating.” The “layers of integration code” are formed by the internal code of a Microservice, while the “external components” are other Microservices.
The latter are accessible via the network and, in the opinion of Mr Fowler, this constitutes a problem for Microservice testers. To my understanding, the problem is that a tester or a team cannot control the network and depends on it. As I recall, such a situation was never a problem for application development in the past. Is he concerned that modern developers have lost their qualification? He recommends: “To mitigate this problem, write only a handful of integration tests to provide fast feedback when needed and provide additional coverage with unit tests and contract tests to comprehensively validate each side of the integration boundary. It may also make sense to separate integration tests in the CI build pipeline so that external [M.P. regarding the team] outages don’t block development.” In your opinion, dear reader, can “fast feedback” compensate for an untested product in the eyes of the business? If such a practice is exercised, it is an alarm signal for the company!
All developers have different experience and skills. If they are allowed to write just a few (“a handful of”) integration tests, I have every right to expect that the common denominator will be “no tests at all”, or that integration tests will be emulated by accessibility[1] tests. I hope you have anticipated my conclusion already – with such an approach we will get untested or barely tested Microservice-based applications. Do we need them? May we allow them into production? If you are looking for trouble, you will probably say “yes”.
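For contrast, a genuine integration test exercises a real network interaction between two deployed Microservices. The sketch below is a minimal illustration in Python; the service URLs, endpoints and payloads are assumptions made for the example, not part of Fowler’s recipe:

```python
import requests

ORDER_SERVICE = "http://localhost:8081"    # hypothetical Microservice under test
PRICING_SERVICE = "http://localhost:8082"  # hypothetical collaborator, deployed for real

def test_order_service_uses_real_pricing_service():
    # No test doubles: the call travels over the network to a running collaborator.
    response = requests.post(f"{ORDER_SERVICE}/orders",
                             json={"sku": "ABC-1", "qty": 3}, timeout=5)
    assert response.status_code == 201

    created = response.json()
    # The total must come from the live pricing Microservice, not from a stub.
    quoted = requests.get(f"{PRICING_SERVICE}/price/ABC-1", timeout=5).json()
    assert created["total"] == quoted["unit_price"] * 3
```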
Component Testing
Mr Fowler defines: “In a microservice architecture, the components are the services themselves” and “A component test limits the scope of the exercised software to a portion of the system under test, manipulating the system through internal code interfaces and using test doubles to isolate the code under test from other components”. Unfortunately, this scoping is oversimplified (and I suspect this is done deliberately to reduce testing time). In reality, Microservice testing comprises two testing realms:
- Inside-out – for testing Microservice’s internal code (layered or not)
- Outside-in – for testing external invocations of the Microservice, i.e. its interactions with others
The absence of the outside-in testing notion in Fowler’s 5 test types explains the absence of load/stress and scalability testing, as well as of some security testing.
The recommendations from Mr Fowler here are about using ”test doubles” such as mocks, stubs, etc. He also points to disturbances to testing caused by the network if it is involved, for instance, for communication with data stores. He advises: “By instantiating the full microservice in-memory using in-memory test doubles and datastores it is possible to write component tests that do not touch the network whatsoever. This can lead to faster test execution times and minimises the number of moving parts reducing build complexity.” I am not sure this is good advice. It has been known for decades that testing should be conducted in an environment as close to the production environment as possible. If Microservices “do not touch the network whatsoever“ in testing while they will be deployed on networks, it is almost guaranteed that the Microservices will not be ready to handle network issues. In other words, they will not be ready for real-world deployment. Thus, “faster test execution times” lead to unrealistic test results, i.e. to a total waste of time, even though such testing is convenient for the development team.
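To show what the criticised style looks like in practice, below is a minimal component-test sketch in Python in which the collaborator and the datastore are replaced by in-memory fakes; all class and method names are hypothetical. Note that nothing in this test exercises network behaviour, which is exactly the limitation discussed above:

```python
# Hypothetical internal classes of the Microservice under test.
class InMemoryOrderStore:
    def __init__(self):
        self._orders = {}
    def save(self, order_id, order):
        self._orders[order_id] = order
    def load(self, order_id):
        return self._orders[order_id]

class FakePricingClient:
    def unit_price(self, sku):
        return 10.0  # canned answer instead of a real network call

class OrderService:
    def __init__(self, store, pricing):
        self.store, self.pricing = store, pricing
    def place_order(self, order_id, sku, qty):
        order = {"sku": sku, "qty": qty, "total": self.pricing.unit_price(sku) * qty}
        self.store.save(order_id, order)
        return order

def test_component_entirely_in_memory():
    service = OrderService(InMemoryOrderStore(), FakePricingClient())
    order = service.place_order("o-1", "ABC-1", 3)
    assert order["total"] == 30.0
    # Caveat: timeouts, retries and partial network failures are never exercised here.
```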
Contract Testing
The contract between a Microservice-consumer and a Microservice-provider is the area where Microservices cardinally differ from SOA Services. The latter define the Service Description and Service Contract as preliminary, off-line means, though they can be automated. The purpose of the Description is to provide the consumer with enough information to decide about using the provider; the purpose of the Contract is a legal and programmatic agreement about which of the provider’s functionality and interfaces are available to the consumer. A provider offers the service and a consumer may use it; if the consumer is not satisfied with the offer, it does not use the service. That’s simple. This is normal business practice, which Microservices still do not comprehend and exercise.
Mr Fowler talks about external dependencies that have to be tested, while he has recommended avoiding dependencies for Microservices by all means. In his description, “Whenever some consumer couples to the interface of a component to make use of its behaviour, a contract is formed between them”. Well, why does the consumer trust the component so much that it “couples to the interface of a component”? The contract creation, in Mr Fowler’s words, sounds as if a Microservice couples with the interface first and only then “thinks” about what it has just done. Shouldn’t this be done the other way around?
He continues, “This contract consists of expectations of input and output data structures, side effects and performance and concurrency characteristics”. Well, this is very confusing because it does not work as described: what expectations can there be after the Microservice has already coupled (or fired an event notification for integration purposes)? The data structure should already be formed by the moment of coupling, especially for RESTful interfaces – there is no room for expectations here. How on Earth could a Microservice know about “performance and concurrency characteristics”, which are not part of the interface? These two ‘contract elements’ are either fantasies, or the consumer-provider contract is established before the Microservice “couples to the interface”. In any case, the given representation of the contract is erroneous.
I am curious how “Each consumer of the component forms a different contract based on its requirements” while the Microservice-provider (the external component) has fixed interfaces regardless of what a Microservice-consumer wishes. This is a totally misleading declaration by Mr Fowler. Moreover, here is another upside-down assertion: “Integration contract tests provide a mechanism to explicitly verify that a component meets a contract.“ Evidently, it is the Microservice-provider itself that defines its interfaces (as well as their SLAs), i.e. it always meets its own contract. The subject of the test is whether the Microservice-consumer meets the contract of the Microservice-provider, i.e. that “the inputs and outputs of service calls contain required attributes and that response latency and throughput are within acceptable limits”. Since the consumer and provider Microservices can be built and governed by different teams, the Microservice-consumer has only two choices: a) accept the given contract; b) use another Microservice; the question of adjusting the Microservice-provider for each Microservice-consumer is out of consideration. Microservice-providers are created to work, not to be constantly modified with the appearance of each new or changed Microservice-consumer. “While a consumer is always right, not every consumer is right for the service”.
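To make this direction of verification concrete, here is a minimal sketch in Python in which the consumer team checks its own interaction against the contract published by the provider; the contract format, endpoint, field names and latency limit are assumptions made for illustration:

```python
import time
import requests

# A contract published by the Microservice-provider (simplified, hypothetical format).
PROVIDER_CONTRACT = {
    "endpoint": "http://localhost:8082/price/{sku}",
    "required_response_fields": {"sku": str, "unit_price": float},
    "max_latency_seconds": 0.5,
}

def test_consumer_interaction_stays_within_provider_contract():
    # The consumer calls the provider exactly as its production code would.
    url = PROVIDER_CONTRACT["endpoint"].format(sku="ABC-1")
    start = time.perf_counter()
    response = requests.get(url, timeout=5)
    elapsed = time.perf_counter() - start

    assert response.status_code == 200
    assert elapsed <= PROVIDER_CONTRACT["max_latency_seconds"]

    body = response.json()
    # The attributes the consumer relies on must match the provider's contract.
    for field, field_type in PROVIDER_CONTRACT["required_response_fields"].items():
        assert field in body, f"missing field {field}"
        assert isinstance(body[field], field_type), f"wrong type for {field}"
```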
End-to-End Testing
End-to-end testing of a Microservice, following an old IT tradition, is usually designed for ‘sunny day’ cases. Actually, this applies to all types of tests. Developers de facto appear to be super-optimistic people who are unaware of real execution contexts… until the problems happen.
According to Mr Fowler, “An end-to-end test verifies that a system meets external requirements and achieves its goals, testing the entire system, from end to end.” I am intrigued as to why the requirements are “external” and what requirements could be internal, but this is ‘small fish’.
Yes, end-to-end testing is the most important and the most difficult test type, as it always has been. For Microservices, it is exceptionally difficult because the other ‘recommended’ tests do not really address the core business task set for the Microservice-based application.
I am amazed by the comments Mr Fowler makes about end-to-end testing, trading it for the pace of delivery and developer convenience:
- “…comprehensively testing business requirements at this level is wasteful, especially given the expense of end-to-end tests in time and maintenance”. Does this mean that the realisation of business requirements should be tested carelessly?
- “One strategy that works well in keeping an end-to-end test suite small is to apply a time budget, an amount of time the team is happy to wait for the test suite to run.” So, if the business task is such that testing the Microservice requires more time than “the team is happy to wait for the test suite to run”, the team may circumvent the end-to-end test as a whole, or “the least valuable tests are deleted to keep within the allotted time. The time budget should be of the order of minutes, not hours”. It is murky what “the allotted time” is and what dictates allocating it before the Microservice gets deployed. If we are supposed to run an end-to-end test, it must be a full end-to-end test; otherwise, it is simply not an end-to-end test but speculation. An end-to-end test should run as long as needed, and its duration depends on the complexity of the task under test, not on the patience of the team, especially considering negative testing and possible asynchronous interactions[2] between Microservices; a chronometer should have no impact on the test duration.
- “…some services may suffer from reliability problems that cause end-to-end tests to fail for reasons outside of the team’s control“. Well, business stakeholders do not really care whether the test failure is attributed to “reasons outside of the team’s control”; they need to know that the Microservice has been tested end-to-end and that the team has done everything in its power (including negotiations with the third party) to eliminate external reasons for failure.
Mr Fowler ends up with astonishing conclusions: “Write as few end-to-end tests as possible” and “Given that a high level of confidence can be achieved through lower levels of testing, the role of end-to-end tests is to make sure everything ties together and there are no high level disagreements between the Microservices”. What does “a high level of confidence can be achieved through lower levels of testing” mean? If a development team is confident in this, it does not mean that anybody else is. Who is responsible, or who pays, when this “high level of confidence” does not help and the application falls apart in the consumer’s hands?
Overall, the quoted statement is an oxymoron! Writing as few end-to-end tests as possible is the same as giving as little assurance as possible that “everything ties” and fits together. Are we in the business of providing reliable SW products, or are we in the business of team games to please developers?
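To show what is being traded away, here is a minimal sketch in Python of a full end-to-end test that drives the whole business flow through the application’s public entry point; the gateway URL, endpoints, payloads and the generous time limit are assumptions made for illustration:

```python
import time
import requests

APP_GATEWAY = "http://localhost:8080"  # hypothetical public entry point of the application

def test_order_flow_end_to_end():
    # Step 1: a customer places an order through the public API.
    order = requests.post(f"{APP_GATEWAY}/orders",
                          json={"sku": "ABC-1", "qty": 2}, timeout=10)
    assert order.status_code == 201
    order_id = order.json()["id"]

    # Step 2: the order travels asynchronously through pricing, payment and fulfilment
    # Microservices; poll until the business outcome is reached, however long it takes.
    deadline = time.time() + 600  # limit dictated by the business flow, not by team impatience
    status = None
    while time.time() < deadline:
        status = requests.get(f"{APP_GATEWAY}/orders/{order_id}", timeout=10).json()["status"]
        if status == "SHIPPED":
            break
        time.sleep(5)
    assert status == "SHIPPED", f"order ended in state {status}"
```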
Part 2 – “To-Be”
Inside-out & Outside-In Testing Realms
The diagram below illustrates the two testing realms.

[Diagram: the Inside-out and Outside-in testing realms with their mandatory test types]
I have listed only the mandatory test types that should be applied to any product released to consumers. If Microservices are products, as Mr Fowler says, all these test types are applicable to them. If DevOps teams have difficulty performing them, the DevOps practice has to be refined. Nobody needs poor-quality products quickly delivered to the market. It is business wisdom: releasing low-quality products to demanding consumers is the same as spitting into eternity and against the wind.
In this section, I dare to propose descriptions of the omitted tests.
Load & Stress Testing
Load and Stress testing belongs to the Outside-in testing realm. Since the time of the “webinisation of businesses”, it has become the norm that testing consumes more time than the development of code. It takes that long because it applies to the application as a whole and propagates to the major application components, which in our case are Microservices and related data stores.
Load and Stress testing requires setting up a configurable engine that generates and sends requests to the API of the Microservice under test, or fires event notifications into the event bus (for push events) or event storage (for poll events). Such Load Test engines are available on the Web tool market. As for event-based load and stress tests, the engines have to be found or built. The duration of these tests depends on the durability of the Microservice and the configured frequency of requests. Special attention should be paid to simulating concurrent requests.
For an individual Microservice, the Load and Stress tests differ in their goals. The Load test aims to find how many concurrent external requests a Microservice can reliably handle, as well as how many external requests it can reliably handle over a specified period of time. The Stress test aims to find the total number of requests a Microservice can handle until it breaks.
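As a rough sketch of such a configurable engine, the Python snippet below fires concurrent HTTP requests at a Microservice API and reports the error rate and throughput; the endpoint, concurrency and request count are illustrative parameters, not a recommendation:

```python
import time
from concurrent.futures import ThreadPoolExecutor
import requests

TARGET = "http://localhost:8081/orders"  # hypothetical Microservice API under load
CONCURRENCY = 50
TOTAL_REQUESTS = 2000

def one_request(_):
    try:
        r = requests.post(TARGET, json={"sku": "ABC-1", "qty": 1}, timeout=5)
        return r.status_code < 500
    except requests.RequestException:
        return False

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(one_request, range(TOTAL_REQUESTS)))
elapsed = time.perf_counter() - start

failures = results.count(False)
print(f"{TOTAL_REQUESTS} requests, {CONCURRENCY} concurrent, "
      f"{elapsed:.1f}s, {failures} failures "
      f"({TOTAL_REQUESTS / elapsed:.1f} req/s)")
```

Raising the concurrency step by step gives the Load figures; keeping it growing until the error rate explodes gives the Stress figure.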
In these tests, a Microservice should be set in the context of all other (external) Microservices and data stores it interacts with. It is possible that one or several of these external components fail; even then, the Microservice under test can still pass the test. Testing of the failed component is the subject of another test.
This test allows us to understand the scale of invocation at which a given Microservice may be used without any modification. If the Load and Stress tests cannot be run by the team, this is not a reason to skip them – there should be a special team capable of writing Outside-in tests across Microservices.
Negative-Failure-Robustness Testing
Such tests belong to both the Inside-out and Outside-in testing realms. The tests are conducted by simulating unexpected inputs and sporadic failures of any and all layers of internal code and of communication network channels, including discrepancies in data transformation/conversion or mapping. This also relates to the failure of any and all related Gateways.
The major criterion of success under such conditions is the ability of the Microservice either to survive and continue providing some functionality to its consumers, or to fail gracefully, i.e. slowly enough to report its failure by logging or sending related notifications.
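A minimal sketch in Python of one such negative test, assuming a hypothetical OrderService with a pricing collaborator: the collaborator is forced to time out, and the test asserts that the service degrades gracefully and reports the failure instead of crashing silently:

```python
import logging
from unittest.mock import Mock

# Hypothetical service with a fallback path for a failing collaborator.
class OrderService:
    def __init__(self, pricing_client, log=logging.getLogger("orders")):
        self.pricing, self.log = pricing_client, log
    def quote(self, sku, qty):
        try:
            return {"total": self.pricing.unit_price(sku) * qty, "degraded": False}
        except TimeoutError as exc:
            # Graceful failure: report the problem and keep serving a reduced answer.
            self.log.error("pricing unavailable for %s: %s", sku, exc)
            return {"total": None, "degraded": True}

def test_quote_survives_pricing_timeout():
    pricing = Mock()
    pricing.unit_price.side_effect = TimeoutError("simulated network failure")
    log = Mock()
    result = OrderService(pricing, log=log).quote("ABC-1", 2)
    assert result["degraded"] is True  # the consumer still gets an answer
    log.error.assert_called()          # and the failure is reported, not swallowed
```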
Merely monitoring the Microservice’s behaviour, which many rely on when implementing the recommended “quick failure”, is not testing, though it is also known as “testing in production”. If a failing Microservice can report its problem and die, another Microservice that needs to interact with the failed one can read/listen to this report and change its behaviour accordingly. For example, it can engage another Microservice and continue application execution. This is a regular requirement for SOA Services, unknown in object-oriented practice.
I have witnessed cases where special people were hired to compose the negative cases for product testing. Alternatively, the practice of public Beta-testing is also known; however, it is rare in Web application testing nowadays. We have to rely on dedicated internal negative testing of Microservice products.
Negative testing is a known ‘pain in the back’ for developers. The negative cases are not specified in the requirements but have to be derived from them. Murphy’s Law states: “Anything that can go wrong will go wrong” – this should be the motivation and basis for defining negative tests. There may be dozens of such tests, alone and in combinations. Even if each of them takes no longer than a minute, the full testing can last more than an hour, i.e. automation here is highly recommended where possible.
If the Negative tests cannot be run by the team, this is not a reason to skip them – there should be a special team capable of writing Outside-in tests across Microservices.
Regression Testing
Regression testing belongs to the Outside-in testing realm. For Microservices, regression tests are widely neglected on the grounds that an independently deployed Microservice isolates its changes and cannot impact others. Yes, if a Microservice were isolated, there would be no need for regression tests. Sadly for the developers who promote isolation, Microservices in applications are not isolated but integrated, and this is confirmed by Mr Fowler. This means that changed Microservices can impact their counterparts, and they definitely do. If a new or updated Microservice that is part of an application is independently deployed, the application becomes untrustworthy until the regression test is conducted and passed.
To be on the safe side, when modifying code in a Microservice, we definitely need to re-execute or re-develop the Unit-tests for this code. Additionally, when we run a Component test, we are supposed to run a Regression test within the Microservice because we do not know how the amended code could impact other code in the Microservice. Then, the Regression test for external Microservices is due.
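One lightweight way to keep such a suite rerunnable on every change is to tag tests by scope and re-execute them in the delivery pipeline. The sketch below uses pytest markers; the marker names, module and endpoints are illustrative assumptions:

```python
# Markers would be registered in pytest.ini:
#   [pytest]
#   markers =
#       regression_internal: rerun after any change inside this Microservice
#       regression_external: rerun against counterpart Microservices after deployment
import pytest
import requests

@pytest.mark.regression_internal
def test_discount_rules_unchanged():
    # Re-executed whenever any code in this Microservice changes.
    from pricing import apply_discount  # hypothetical internal module
    assert apply_discount(100.0, "GOLD") == 90.0

@pytest.mark.regression_external
def test_counterpart_still_accepts_our_payload():
    # Re-executed against the deployed counterparts after each independent deployment.
    r = requests.post("http://localhost:8081/orders",
                      json={"sku": "ABC-1", "qty": 1}, timeout=5)
    assert r.status_code == 201
```

The internal set would then run with, for example, `pytest -m regression_internal` in the Microservice’s own pipeline, and the external set with `pytest -m regression_external` after each deployment to the shared environment.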
If the Regression tests cannot be run by the team, this is not a reason to skip them – there should be a special team capable of writing tests inside and across Microservices.
Scalability Testing
This type of testing belongs to both the Inside-out and Outside-in testing realms. Scalability is partially tested in the Load and Stress tests, which verify the overall behaviour of the Microservice under load. The Scalability test aims at verifying the mechanism used for horizontal scalability – more requests can be handled by more instances of the Microservice.
In a containerised deployment, horizontal scalability depends on the scalability of the container platform. The container environment may differ in each company or team and has to be verified for scalability.
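As a minimal sketch of such a verification, assuming a Kubernetes-based container platform and a load burst like the engine sketched in the Load & Stress section, the script below scales a deployment step by step and records throughput per replica count; the deployment name, endpoint and numbers are illustrative:

```python
import subprocess
import time
import requests

TARGET = "http://localhost:8081/orders"  # hypothetical Microservice API
DEPLOYMENT = "order-service"             # hypothetical Kubernetes deployment name

def run_load_burst(n=500):
    """Tiny sequential burst; a real test would reuse the concurrent load engine above."""
    start = time.perf_counter()
    ok = sum(
        1 for _ in range(n)
        if requests.post(TARGET, json={"sku": "ABC-1", "qty": 1}, timeout=5).status_code < 500
    )
    return ok / (time.perf_counter() - start)

for replicas in (1, 2, 4, 8):
    # Scale the deployment using the platform's own mechanism.
    subprocess.run(
        ["kubectl", "scale", f"deployment/{DEPLOYMENT}", f"--replicas={replicas}"],
        check=True,
    )
    time.sleep(30)  # crude wait for the new instances to become ready
    print(f"{replicas} replicas -> {run_load_burst():.1f} req/s")
    # The scaling mechanism passes if throughput grows roughly with the replica count.
```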
Building this test can be partially delegated to the infrastructure team, but it has to be executed by the infrastructure and development teams together.
Security & Compliance tests
Security & Compliance tests are about business functional testing and the related technology implementation. The GDPR (EU Regulation) demands security by design and by development. Security and compliance are not something that can be postponed until after the core code is written.
The Microservice’s code and structure, as well as its interfaces, should demonstrate compliance with the required security protections. This relates, at the least, to authentication of the consumer, i.e. any external Microservice that wants to interact with yours, to authorisation of actions on your Microservice, and to data integrity, protection and confidentiality. Secured interaction between the Gateways and the related Microservices is a must-have as well.
If two Microservices run in the same application, this does not mean they trust each other, especially if they belong to different teams or are of different versions. Trust has to be established for each deployed version via compliance with the security controls. If your Microservice receives data from an external Microservice, it is best practice to quarantine this data until it is validated against your quality policy for the Microservice-provider or consumer, because the data may contain a hidden executable query. Also, validation of the sender’s credentials, or an equally reliable mechanism, should be executed. Your company can lose much more money through gaps in the security and compliance of your Microservice than the revenue your hurried delivery might possibly bring.
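A minimal sketch in Python of two such checks, assuming a shared-secret HMAC signature between the two Microservices and a simple allow-list policy for the incoming payload; the field names and policy are illustrative, not a complete security design:

```python
import hashlib
import hmac
import json

SHARED_SECRET = b"replace-with-a-real-key"  # distributed out of band, illustrative only
ALLOWED_FIELDS = {"sku": str, "qty": int}   # quality policy for the incoming payload

def verify_sender(raw_body: bytes, signature: str) -> bool:
    # Validate the sender's credential: an HMAC over the raw message body.
    expected = hmac.new(SHARED_SECRET, raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def validate_payload(raw_body: bytes) -> dict:
    # Quarantine rule: accept only known fields of known types, reject everything else.
    data = json.loads(raw_body)
    if set(data) != set(ALLOWED_FIELDS):
        raise ValueError("unexpected fields in payload")
    for field, field_type in ALLOWED_FIELDS.items():
        if not isinstance(data[field], field_type):
            raise ValueError(f"bad type for {field}")
    return data

def test_tampered_message_is_rejected():
    body = json.dumps({"sku": "ABC-1", "qty": 1}).encode()
    good_sig = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    assert verify_sender(body, good_sig)
    assert not verify_sender(body + b" ", good_sig)  # any modification breaks the signature

def test_payload_with_hidden_query_is_quarantined():
    malicious = json.dumps({"sku": "ABC-1", "qty": 1, "q": "DROP TABLE orders"}).encode()
    try:
        validate_payload(malicious)
        assert False, "payload should have been rejected"
    except ValueError:
        pass
```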
Microservice QA vs. Pace of Delivery
In modern technology, the technology itself depends more and more on how developers work. This is a phenomenon unknown before. A common assumption is that products, including SW products, have to have a quality that satisfies product consumers regardless of the convenience of the producers. This relates to both tailored and mass-produced products.
For several decades, companies have had, and still have, a Quality Assurance (QA) function. When SW development moved into Agile Teams from a shared pool of ‘heads’, QA followed and ended up split and placed under the team’s Agile model. For example, in an Agile Scrum team, a QA specialist is now supervised by the team’s Product Owner and even the Scrum Master.
As is known, each developer is obliged to write Unit-tests for their own code and to conduct testing until satisfied. Since Agile teams are small ‘two-pizza teams’, the same developers have to collect and comprehend requirements, create designs, code and provide QA. A hope for full automation is elusive because requirements, design and coding still require human work, with no observable prospects of becoming robotic at scale.
Enthusiasts who want to ‘empower’ development teams by packing all SW professions into the team have probably forgotten that each SW Engineering profession has its own specific objectives, which do not fit into a single Agile Sprint time-frame (my first-hand experience). Such packing compromises professionalism by placing individual specialists under the pressure of all the others, who are specialists in something else. As is known, “a jack of all trades is a master of none”.
In small DevOps or Agile teams, we assume that each test is designed and executed by professionals. In other words, we assume that each member of the team is a professional tester. Is this right? Is this realistic? At the very least, this assumption contradicts the main goal of the team – speedy delivery – because each test takes time and prolongs the delivery period. What can one QA specialist do in a cultural environment where everyone else, including the Team Lead, pushes for speed?
All human experience teaches us that the producer and the controller should be separated – a team producing a Microservice as an SW product may not be the controller of its quality. The objective of the DevOps team is speed of delivery, while the objective of QA is finding as many bugs as possible. Since the bugs have to be fixed, this slows the delivery down. The simplest psychological conclusion is to find as few bugs as possible (because improving the quality of development is much more laborious and difficult when you are distracted by other ‘Ops’ tasks). I do not want to hurt DevOps, but if a method allows cheating, cheating will always take place.
Several “informal surveys reveal that the percentage of bugs found by automated tests are surprisingly low. Projects with significant, well-designed automation efforts report that regression tests find about 15 percent of the total bugs reported”. Regression tests are done after the majority of other tests have been completed – this finding demonstrates the quality of the other tests performed by developers…
One possible solution to the QA problem is having a QA Agile Team separate from the DevOps teams. The QA specialists would work in the DevOps teams but would be less influenced by the DevOps “speed” objectives. The DevOps team would still be responsible for conducting all due tests, but now this would be done under the control of a QA specialist, and all results would become witnessed and public. No deployment to production would be possible without the known bugs being fixed. I have assessed this kind of solution in real-world cases for Architects and Scrum Teams – it worked much better than when Architects were inside the Scrum Teams; the SAFe methodology fully confirms my recommendation.
A Way to Test – Instead of Conclusion
If you were patient enough to read this article through, I hope you have noticed my disappointment with the recommended shortened checklist of tests for Microservices and with the permission to avoid tests if they run longer than a few minutes. Such irresponsible directions create highly risky situations in all companies where DevOps teams are allowed to follow these guidelines.
The Best Practices for DevOps articulate one of DevOps’ most important goals as “the automation of all rote tasks that don’t require human ingenuity”. Also “Integrating QA experts into your DevOps teams will empower the teams to decide best which aspects of testing can be automated and which ones are best tackled with the human touch”. This would be just great if the lashes of ‘simplicity/convenience’ and ‘test execution time’ were removed from the picture.
Regrettably, their presence results in barely tested products appearing on the market, making innocent consumers suffer. Why has our consumer-centricity suddenly been replaced by convenience-centricity for DevOps?
All Inside-out and Outside-in tests can be defined and applied to any Microservice with any invocation protocol. There are no technical reasons whatsoever preventing proper testing, while poorly tested SW products can, and usually do, put companies into financial troubles that no Time-to-Market can compensate for.
All the problems I identified and discussed in the article have one very simple solution: separate development from the control of its outcome. The QA specialists should be able to do their work – define and/or conduct and/or control testing – to the best of their abilities and without the time-pressure imposed by DevOps.
[1] Accessibility is a concept focused on enabling access to an entity. For Microservices, this means that their API or fired event notifications are accessible to other Microservices.
[2] The major downside of asynchronous interactions is the unpredictable time at which the receiver will act on the request. Event-based interaction models are the classic example of such unpredictability.