“Architect for Developers” series
Many years ago, even before I dreamed of becoming an Architect, I wrote an automated tool that mimicked manual user activities on a Web Page in order to test the UX and the application's reactions to the user's commands. That was a very exciting experience. Nowadays, watching Developers try to automate the testing of their own code, I have asked myself whether almost 30 years of testing experience in the industry is headed for the rubbish bin, or whether I am fundamentally missing something.
All these years, it was “QA 101” that the author of code may test it only to the level of the Unit test, while all other tests should be performed by other people, in the absence of the author-Developer and outside of the development environment. I even know companies that outsourced development to one company and hired another company for QA/testing. When I talked with modern DevOps engineers and Developers who are going crazy about the speed of development and are ready to produce barely working code for the sake of wider distribution, they pushed back, asking, ‘Why do we need other people to test our code? We are your employees; why don’t you trust us?’ This is a provocative question: trust in a person has nothing to do with trust in the outcome of that person’s work, which in the case of employment belongs to the company, not to the person. So, the answer is very simple: because we are all human, and humans have subconscious instincts for making the case for personal benefit. Also, not all business tasks and industries can afford the ‘mass-media pattern’, where quality and consistency are traded for pace to market. But let us tackle the problem of modern testing step by step, outside-in, and from the perspective of the Architect of the product.
SW testing is only as good as the testing tools are. There are many methods and testing tools on the market. For example, tools like QTP or Selenium support Functional and Performance testing; Cucumber supports Behaviour-Driven Development (BDD), whose testing is rooted in Test-Driven Development (TDD). There are many integration and stress tests, as well as regression tests and tools, plus specific security and domain tests. An Architect should deal with all of them, because s/he is responsible for the development outcome being compliant with the requirements, while the PM and Tech Leads are responsible for the delivery (of what they can deliver).
Definitely, it would be nice if tools could read requirements and generate Test Cases. BDD tries to deliver on this dream. Has it really succeeded? I am not sure, and that is because of a few objective reasons.
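To make the BDD idea concrete before examining its limits, here is a minimal, framework-free sketch of what tools like Cucumber automate: a Given/When/Then scenario mapped to executable steps. The domain (an Account with a withdraw operation) is hypothetical and chosen only for illustration.

```python
# A hand-rolled Given/When/Then scenario. BDD tools generate this kind of
# test skeleton from a plain-language feature file; here the mapping from
# scenario text (in the comments) to code is done by hand.

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def scenario_withdraw_within_balance():
    # Given an account with a balance of 100
    account = Account(balance=100)
    # When the user withdraws 30
    account.withdraw(30)
    # Then the remaining balance is 70
    assert account.balance == 70

scenario_withdraw_within_balance()
```

The scenario reads like a story, which is exactly the appeal of BDD; the question raised below is who guarantees that the story itself is complete and consistent.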
The first of them is the probable discrepancy between business requirements, Architectural design solutions (which include many more concerns than the business articulates, but which are mandatory for producing SW products), and User Stories composed by the PO and told to the Development Teams. I hope you have noticed that only very simple projects can allow themselves to expose business requirements to Developers directly. This is because only at that level of complexity can a human brain bridge the gap between the business need and the User Story, and because such projects carry low risks and small impacts on the company's wellbeing. In reality, though, companies cannot survive long in the market doing only small things. The bigger the business task, the wider and deeper the gaps, and the higher the risk of even partial development failure: a mismatch between what was needed and what has been delivered becomes usual practice.
Architectural work usually produces several solution options at different levels of the overall design, which allows more accurate analysis: whether a particular requirement is efficient against the corporate strategy and market trends, which option is less expensive, which one carries lower risks, and so forth. Why is this so important, and why is it not done as quickly as Developers can code? The answer is this: complex tasks require accurate thinking first, weighing alternatives and designing for “What if?”, because a mistake made at the level of the solution can cost the company hundreds or thousands of times more than a bug in the code. I am not even talking about how perfectly created individual components, such as Microservices, can together result in a poor application if the integrity between them is neglected. Control over this integrity is also a responsibility of Architects working throughout the application and across Development Teams. At the end of the day, I would only welcome quick quality development.
BDD testing requires much more than writing a Test Scenario; it requires a deep understanding of the business task in its context, with all its dependencies and consequences. If Developers were to dig into these aspects, they would never get to writing code: it requires special methods, education, and skills that are not included in the development package. This is a prerogative of Architects. Moreover, BDD is predominantly focused on UX and UI; what testing will cover the code behind the Web Page, its functions and data processing?
Yes, many projects are narrow and amount to creating user access to stored data. This creates the impression that everything in SW revolves around connecting data in data stores to a UI, Web or Mobile. This is simply wrong, and recent massive transformation programmes have proved it. IT faces tasks that one small DevOps team cannot deliver; there is a need to split business tasks into function- and sub-function-based verticals for more accurate development. But in such cases, Developers should not deploy their outcomes/code straight into the production environment, because the different pieces of code must be tested for their joint behaviour, working together, with stress and negative testing at different points of the application. Does it slow down DevOps delivery? Yes, it definitely does, and it is the right thing to do. Presenting a badly designed and improperly tested product to the market is the same as “spitting into eternity”. When more than one Team works on the task, or the task comprises more than one component developed by different people, the outcomes should be deployed in the Testing Environment for cross-testing (of all types) and integral validation by Architects.
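What such cross-testing catches can be sketched in a few lines. The two components below (an order parser and a pricing function, both hypothetical, as if owned by different Teams) may each pass their own unit tests, yet the joint behaviour, including the negative path, still needs a check in a shared testing environment:

```python
# Two independently developed pieces of code, exercised together.
# Each could pass its own unit tests in isolation.

def parse_order(line):
    # Team A's component: parse "SKU, quantity" lines.
    sku, qty = line.split(",")
    return sku.strip(), int(qty)

def price_order(sku, qty, price_table):
    # Team B's component: look up the unit price and total it.
    return price_table[sku] * qty

# Joint 'sunny day' behaviour of both components:
price_table = {"A1": 2.5}
sku, qty = parse_order("A1, 4")
assert price_order(sku, qty, price_table) == 10.0

# Joint negative path: an unknown SKU must fail loudly, not price silently.
sku, qty = parse_order("ZZ, 1")
try:
    price_order(sku, qty, price_table)
    raise AssertionError("unknown SKU should not be priced silently")
except KeyError:
    pass
```

Neither Team's unit suite alone covers the second case, because the unknown SKU only arises when the parser's output meets the pricer's table.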
Talking about BDD and User Stories, I cannot skip Specification by Example. I personally hate examples for the following reasons:
- if a person cannot explain a task or a problem to me in lucid English, it means the person does not understand what s/he is talking about;
- if I am given 5 examples, I always suspect that there are another 5 important examples that are not articulated;
- if I am given an example instead of a consistent explanation, I cannot be sure that this example is representative and that it makes sense to work on its basis.
A common case: the PO comes to the Developers and tells one story; a day later, when the design is ready and coding has started, tells the second part of that story; and a week later, when the story is in production, brings the third part. This is not Agile development; it is incompetence of the PO and improper collection of business needs/demands. One of the reasons for such a situation is that the interviewer does not ask the proper questions, top-to-bottom. If a given answer is top-to-bottom, it is usually easy to suspect the unarticulated parts of the task and to prepare the design solution with an appropriate hook, even when those parts are not spelled out. This is standard practice in the development of services, and now of Microservices: it is the only agility a Developer has to have, i.e. to look into future changes. This note contradicts lean practice, which is de facto suitable only for fully pre-specified manufacturing.
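Such a “hook” for the unarticulated parts of a story can be as simple as a stable entry point that takes the variable rule as a parameter. In this hypothetical sketch, the PO has so far asked only for percentage discounts, but the design already leaves room for the flat-amount variant we suspect is coming:

```python
# All names (apply_discount, percentage_off, flat_off) are illustrative.

def apply_discount(price, strategy):
    # The stable entry point, the 'hook': future discount kinds plug in
    # here without changing this function or its callers.
    return strategy(price)

def percentage_off(rate):
    # Today's articulated requirement.
    return lambda price: price * (1 - rate)

assert apply_discount(200.0, percentage_off(0.5)) == 100.0

# Next week's 'third part of the story' plugs into the same hook:
def flat_off(amount):
    return lambda price: max(price - amount, 0.0)

assert apply_discount(200.0, flat_off(30.0)) == 170.0
```

The design choice is deliberate under-commitment: the code does not implement the unasked-for variant, it only avoids baking today's single case into the interface.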
I have never worked in the Waterfall model; for me it was the iterative-incremental RUP model (practically, an agile model). I know for sure that no new requirement or new story may be taken into IT at once. The delivery manager or PO should estimate how long the new item will take, how it fits into the Sprint schedule, how long security and compliance check-ups would take, and the like. Also, in many places where I worked, we built a special acceptance form for each requirement before the related story was formed. This form included mandatory questions to be answered:
- what outcome and value should be achieved if the requirement is implemented? (if, instead, we received a requirement on how something should be achieved, it was denied right away);
- why is the required outcome needed (the expected business effect)?
- who will use it (which team or process)?
- what are the business dependencies?
- who are the suppliers of inputs?
- how does the requirement relate to the corporate/divisional strategy (is it needed, or is it someone's wish)?
- what would be the business consequences if the requirement were not realised?
No MoSCoW prioritisation was applied before the answers to these questions were collected. As a result, we usually reduced the number of requirements by up to 40 per cent. Is it slow again? Well, Development Teams are not in the business of satisfying “Christmas Wish Lists”, are they?
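The acceptance form described above can be encoded as a small data structure with a completeness gate. This is a hypothetical sketch; the field names are illustrative, not taken from any real tool:

```python
from dataclasses import dataclass, fields

@dataclass
class RequirementAcceptanceForm:
    expected_outcome: str       # what outcome/value the requirement achieves
    business_rationale: str     # why the outcome is needed
    consumers: str              # who will use it (team or process)
    business_dependencies: str  # what the business dependencies are
    input_suppliers: str        # who supplies the inputs
    strategy_alignment: str     # relation to strategy: a need or a wish?
    cost_of_inaction: str       # consequences if not realised

def ready_for_moscow(form: RequirementAcceptanceForm) -> bool:
    # Only a fully answered form proceeds to MoSCoW prioritisation;
    # any blank answer sends the requirement back to its author.
    return all(getattr(form, f.name).strip() for f in fields(form))
```

A form with any blank answer is simply returned before prioritisation, which is exactly how the 40 per cent reduction happens in practice.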
I can recommend the work of Mr Colin Hammond and his ScopeMaster® product, the world's first tool that analyses software requirements (using Natural Language Processing) for incompleteness, ambiguity and contradictions in the text, in order to perform task size measurement and QA. It is a well-known fact that people act according to their own agendas, which do not necessarily match the project's and the company's objectives. So, the PO's story may be good, but not full and not in its complete context. Everything has to be verified before code is written.
Developers should pay special attention to so-called negative testing at the Unit-test level. If such testing is not included and/or not reviewed and approved by other people, no deployment should be possible and the job cannot be accepted as done. Negative testing, for those who do not know, is testing in which anything and everything (one by one or in groups) goes wrong in the developed code. The outcome/code should provide a solution for what is to be done in each of the negative cases. I believe this type of test has to be written by the Developer and validated by the QA person. One can say, ‘It is not in the business requirements.’ Certainly, everyone, including business people, thinks first of all about a ‘sunny day’, but products/outcomes that are incapable of working reliably on a ‘rainy day’ are the worst products in the market. If anything can break, it will certainly break.
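A minimal sketch of what such negative tests look like, assuming a hypothetical parse_amount helper: each test feeds the code something that should go wrong and asserts that the failure is handled deliberately, not by accident.

```python
def parse_amount(text):
    # Parse a monetary amount; reject anything that is not a
    # non-negative decimal number.
    try:
        value = float(text)
    except (TypeError, ValueError):
        raise ValueError(f"not a number: {text!r}")
    if value < 0:
        raise ValueError(f"negative amount: {value}")
    return value

def expect_rejection(bad_input):
    # True if the 'rainy day' input was handled explicitly.
    try:
        parse_amount(bad_input)
    except ValueError:
        return True
    return False  # the bad input slipped through

# Negative cases, one by one:
assert expect_rejection("abc")
assert expect_rejection(None)
assert expect_rejection("-5")
# And one 'sunny day' case to keep the suite honest:
assert parse_amount("12.50") == 12.5
```

Note that the rejections are themselves specified behaviour: the code states what happens in each bad case, rather than letting an arbitrary exception escape.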
People in a delivery rush are weak at recognising exceptions or complex rules: they are either too inattentive to details and let anything fly, or they postpone the work until later, which never materialises, and everything goes to hades. Plus, if refactoring, so popular in development, is applied to already tested code, all tests must be repeated. There is no way around it.
Does it slow development/delivery? Yes, and that is because we do not need or want a delivery that is error-prone. Do not be afraid to talk to your manager, PM or PO to ask for more time for proper testing: the pressure on the PM/PO for the extra time will be much less than the pressure from angry customers when product failures happen in production. This becomes more apparent with the trend of DevOps teams becoming obliged to support their own outcomes: the more production problems, the less new development.
Modern SW development deviates from UI-centric development more and more. Architects are the only people capable of solving middle-tier and back-end technical problems as an ensemble and only then splitting them into smaller consistent tasks that Agile Teams can successfully realise. Thus, all Test Scenarios for the middle tier, security, robustness, manageability, scalability and interface-centric integration are Architectural tasks.
These considerations lead to a straightforward conclusion: Test Scenarios for any cases more complex than trivial should either be written by Architects, or Architects have to make significant contributions to them.
Another important thought relates to User Acceptance Testing (UAT). On one hand, a user is usually a human working with the application/system/product via its UI. On the other hand, the UI is helpless without the back end if the task includes any business logic not directly embodied in the UX. The business requirements for the UI are articulated and transformed into User Stories; but how frequently does a PO formulate Stories for the SW behind the UI? The Cucumber tool follows BDD and generates Test Cases for the Stories. Though we definitely need similar Test Case generation for the rest of the code, the problem is that those Test Cases exist in isolation, as the Stories do. Therefore, we risk a disintegrated application composed of isolated components such as Microservices, and the tests can contradict each other. In BDD, no “four-eyes” validation is specified over the produced Test Cases, and nobody is responsible for their compliance with the requirements.
As a result, all these concerns show that Developers alone cannot both write the acceptance tests, including UAT, and justify their accuracy. So, while Business Analysts, the PO or Tech Leads work on User Stories for Developers about fine-grained UI tasks, Architects have to focus on the whole set of Acceptance Tests covering the full information and UX flows.
In almost all mid- to big-size projects, I have noticed that the information collected from lower-level operational business personnel is not enough for developing the right technical implementation. The requirements/stories have to be articulated or written by the mid- or high-level business managers to be consistent and integral. Such people care about final results, but they can be interviewed or asked to review business requirements only once or, at maximum, twice. The agile idea of collaborative work between the business and the delivery team is a dream or a myth, which materialises only in small projects or means collaboration with low-level operational business staff; “Agile by the book” does not scale vertically. So, a realistic Development Team should not count on multiple interactions with SMEs in bigger projects. Architects are the people who can work with higher managers professionally, because they operate using abstractions of detail and address business tasks in a top-to-bottom direction, i.e. in the same way as managers do.
Sometimes, clients require working software to be produced almost immediately. If it is not in a media/news domain, this is very suspicious and indicates either that the requirement is inappropriate or unapproved, or that this is a mistake of the management. In all such cases, due diligence is a must-have. If a Development Team is treated as a slave, I cannot help here. Otherwise, the PO should find out what is really needed, and for whom, before composing a story (this is the situation where the process/protocol of requesting SW products can save Developers from overtime work). The context, or actually the absence of context, of such an urgent demand is usually a source of trouble in the future.
In the end, let me point out that SW Testing in the era of digitalisation does not lose a bit of its crucial nature. Due to the distribution of SW, strong testing becomes even more vital. Poorly tested SW products create troubles for customers instead of satisfaction. Do not trust the opinion that the new young generation is more tolerant of bad products produced with the sole goal of automation and speed to market. Young people may be patient when downloading new music, but they become as frustrated as everyone else when their payment transactions fail. There is a huge category of SW-enabled services and products for which this goal of speed is not a goal at all; people accept breaks in mass-media information distribution quite differently from failures of their fridges, cars, messages or money transfers. In the majority of cases, Unit testing alone is not enough.
All good products have been well tested.