This is the story of how one of my previous teams built a software product and delivered all its features to the client.
The team was technically strong, so they decided that the testing strategy would include only automated tests. The project scope was closed from the start, so the risk that change requests would force tests to be reimplemented was low. This seemed like a good start.
To speed up test implementation, automated tests were developed immediately for every component. Whether it was an API, a screen, an event consumer, a BPM process, or anything else, the backlog item was closed as soon as its tests passed. How wonderful!
The team implemented more than 5000 automated tests, correcting some issues along the way, and we delivered the project to the client for User Acceptance Testing. We were confident it would be a success!
Within a few days, we started getting not-so-good news… in fact, bad news. Bugs were found, several requirements were not met, processes spanning more than one component had problems, and some state machines had issues. Finally, some solutions met the requirements as written, but those requirements no longer made sense to the customer.
We were stunned: we had implemented automated tests for all the components. Why had this strategy failed, causing so many problems?
To answer that question, we needed to analyse the methodology used to manage the project scope and its deliveries, as well as the test strategy.

So, we went through all the requirements with the client again. On paper everything had looked perfect, but because we had not done this review during the project, we had caused ourselves several problems:
- Some requirements no longer made sense to the client by the time we implemented them. If we had reviewed them together along the way, we would probably have concluded that other requirements made more sense, or the same ones with some adjustments.
- How could some requirements not be met when we had implemented a solution for each of them? Because at the beginning of the project we did not understand the client’s real needs and pains (though we thought we did!). If we had revisited these topics during the project, we would have gained a better understanding of their needs and deeper knowledge of their business. Continuous communication with the client is what lets us understand their real pains.
- We developed the project in sprints, but we only delivered the features to the client at the end of the project. Hence, we received no feedback during the development phase, and we never adjusted the features to the client’s real requirements.
Automation didn’t kill manual tests.
As for the test strategy, we made a huge mistake: we believed that near-100% coverage from automated unit tests (functional tests) would mean almost zero bugs. We were so naive!
When we automate tests, we need to ensure that each test is correctly implemented and really tests the requirement. When the person who develops the feature also writes the test, the test is influenced by that person’s knowledge of the implementation rather than by the requirement.
For example, if the developer misunderstood a business rule or implemented it wrongly, the test they write will match their understanding, so it will be wrong too and will not validate the business rule when executed.
If another person implements the test, they will go to the business rule specification and write the test without being influenced by the implementation.
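To make this concrete, here is a minimal pytest-style sketch of the problem. The discount rule, function, and amounts are hypothetical, invented for illustration: the test written from the implementation inherits the developer’s misreading and passes, while the test written from the specification catches the boundary bug.

```python
# Hypothetical business rule: orders of 100 EUR or more get a 10% discount.
# The developer misread the rule and implemented a strict "greater than".

def apply_discount(total: float) -> float:
    if total > 100:  # buggy: the specification says `total >= 100`
        return round(total * 0.9, 2)
    return total

def test_discount_written_by_the_developer():
    # Written from the implementation: it encodes the same misreading,
    # so it passes even though the business rule is violated.
    assert apply_discount(150) == 135.0
    assert apply_discount(50) == 50

def test_discount_written_from_the_spec():
    # Written by someone reading the specification: the boundary value
    # exposes the bug, because "100 EUR or more" must be discounted.
    assert apply_discount(100) == 90.0
```

The first test passes and the second fails, which is exactly the point: only the specification-based test tells us the business rule is broken.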
If the test itself is incorrectly implemented, that is less of a problem: it will fail when executed, someone will analyse the feature (and the test), and the faulty component will be fixed.
On the other hand, unit tests are not enough. Integration tests and system tests are essential to ensure that the system works as a whole. Testing each component in isolation with 100% coverage does not mean the system has no bugs.
In fact, bugs can still exist: errors in the design or specification, incorrect assumptions about the meaning, units, or boundaries of the data passed between components, failures in message interpretation between systems, and so on.
To mitigate this, every time an interaction between two components, systems, packages or microservices is developed, a set of integration tests must be executed. These tests are strong candidates for automation.
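As an illustration, here is a hypothetical sketch (all names invented for the example) of two components whose unit tests could each pass in isolation, plus an integration test that exposes an incorrect assumption about the units of the data crossing the boundary between them:

```python
# One component quotes prices in cents; the other assumes euros.

class PricingService:
    def quote(self, sku: str) -> int:
        # Returns the price in cents (this component's own convention).
        return {"BOOK": 1999}.get(sku, 0)

class Invoicer:
    def total_in_euros(self, amount: float) -> str:
        # Assumes the amount is already expressed in euros.
        return f"{amount:.2f} EUR"

def test_pricing_and_invoicing_together():
    # Exercise both components together, as the real flow does.
    pricing, invoicer = PricingService(), Invoicer()
    amount = pricing.quote("BOOK")
    # Fails: produces "1999.00 EUR" instead of "19.99 EUR", revealing the
    # units mismatch at the boundary that no unit test could see.
    assert invoicer.total_in_euros(amount) == "19.99 EUR"
```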
Even when all integrations work, system testing must still be executed, because it “focuses on the behaviour and capabilities of a whole system or product”, validating that the system is complete and all processes work as expected.
For example, in a state machine implementation, a non-final state is sometimes forgotten and its transition event never implemented. Another example occurs in event-driven systems, where an event is produced but no service consumes it, so the process remains unfinished.
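The first case can be caught with a simple system-level consistency check. The following sketch is hypothetical (the states and events are invented): given a transition table for an order process, it asserts that every non-final state has at least one way out.

```python
# Transition table: (current state, event) -> next state.
TRANSITIONS = {
    ("CREATED", "pay"): "PAID",
    ("PAID", "ship"): "SHIPPED",
    # Oops: the "deliver" transition out of "SHIPPED" was never implemented.
}
FINAL_STATES = {"DELIVERED", "CANCELLED"}

def test_no_dead_end_states():
    all_states = {s for (s, _event) in TRANSITIONS} | set(TRANSITIONS.values())
    states_with_exits = {s for (s, _event) in TRANSITIONS}
    dead_ends = all_states - FINAL_STATES - states_with_exits
    # Fails with {'SHIPPED'}: a non-final state from which the process
    # can never advance, so every order gets stuck after shipping.
    assert not dead_ends, f"Stuck states: {dead_ends}"
```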
These kinds of problems are usually found by system testing; in my opinion, these are the most important tests for any business. As the ISTQB syllabus teaches us, testing includes checking whether the system meets its specified requirements, but not only that: it also involves validating whether the system will meet the client’s and the business’s needs in their operational environment(s). We need to understand whether each process or functionality makes sense and works end to end.
Which knowledge base should your tests rely on?

Automated testing is a powerful tool, but it cannot replace all manual tests. We need to evaluate the whole system’s characteristics and environment to understand which tests we should execute manually and which we should automate.
To decide on a test strategy, it’s important to understand the advantages and disadvantages of each approach.
Manual testing is a knowledge-based activity that relies on human judgment and the capacity to analyse whether something makes sense. If we do not execute manual tests, we lose the advantage of experience-based test techniques and the human ability to judge whether the solution makes sense for the business and meets the requirements.
Manual testing gives us a chance to find extra bugs that automated tests would never find, because it allows us to follow the gut feeling that “something smells bad” and explore areas that may never have been specified or tested.
Even when we decide to automate tests, we should test the software manually before implementing the automated versions: first to confirm that automation is possible at all, and then to ensure that the automation is correct.
Automated testing, by contrast, is an exact, repeatable exercise: it is software testing other software, producing the same result every time for the same input.
Testing a new feature manually can be fun, but testing the same features time and time again to prevent regressions is demotivating, frustrating, and time-consuming. That is why automating regression tests is such an important way to save money and spare the test team.
And what about performance and load tests? It would be insane to execute those manually! You could try, but it would take a lifetime, the results would probably be inaccurate, and the coverage would be smaller. The larger the software and the more stable the feature scope, the greater the value of test automation.
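As one example of what such automation can look like, here is a minimal load-test sketch using Locust, an open-source load-testing tool; the host and the /health endpoint are assumptions for the example, not from the project:

```python
# Run with: locust -f loadtest.py
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    host = "http://localhost:8000"  # hypothetical system under test
    wait_time = between(1, 3)       # simulated think time between requests

    @task
    def check_health(self):
        # Every simulated user repeatedly hits the endpoint; Locust
        # aggregates response times and failure rates across all users.
        self.client.get("/health")
```

Hundreds of concurrent simulated users can then be spawned from the Locust UI or command line, something no manual tester could reproduce accurately.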
When part of the testing process is automated, you gain several advantages:
- Productivity increases, because test execution is faster.
- Confidence in the product grows, because automated execution is more reliable and less error-prone.
- The team becomes more efficient: once implemented, the tests can be repeated without human intervention, freeing the team’s energy for tasks that cannot be automated.
Check the following table for when each approach is more suitable:

| Scenario | More suitable approach |
| --- | --- |
| Exploratory testing, following a gut feeling that “something smells bad” | Manual |
| Judging whether a solution makes sense for the business | Manual |
| First execution of a newly developed feature | Manual |
| Regression testing, release after release | Automated |
| Performance and load testing | Automated |
| Stable features that must be re-tested repeatedly | Automated |

Bet your money on the right testing.
So, the million-dollar answer to the question of how we could write more than 5000 automated tests and still fail the project is simple: the test strategy was wrong!
We need to find the right balance between automated and manual testing, because each has its own strengths and weaknesses, and each delivers value only when applied in the right context.
As an agile company, Polarising implements automated tests to save teams time and, above all, to deliver high-quality software. However, to ensure that the solutions bring value and truly fit your business requirements, manual tests are performed as well.
Márcia Catarino
Business Analyst