Two themes prevail in corporate software development: "Zero Quality Errors" and "Doing more with less". How these two concepts are implemented is critically important.
- Cutting the number of testers increases the load on those who remain to check every test case as thoroughly as possible, which itself introduces errors. Testers who are accountable for ensuring that features are not released without sign-off come under pressure, and quality is compromised.
- Keeping more testing resources does not guarantee zero quality errors either, especially when the testers do not keep up with current trends. The number of communication lines increases across the QA manager, test lead, offshore test coordinators, and testers. This functional hierarchy distances the testers from the developers, defeating the requirements of a self-organized team. Consequently, requirements dilute and morph, leading to management problems as customer complaints increase, time to market slips, and product reviews decline.
The logical solution is automated testing: making the system do the testing, detecting more defects at earlier points in the development life cycle, and continuously testing deployed code for production bugs. The solution is logical and practical because it accomplishes more work with fewer resources, consistently, continuously, and almost effortlessly compared with the need to have a human present to test manually.
Does that mean automated testing is a perfect solution, where we simply enable computer-aided software testing (CAST) tools for as many testers as possible? Agile engineering practices recommend automated testing but also emphasize acceptance testing, in which the business owners are involved in testing as well. But how far are our people in client-facing roles, such as product managers, project managers, program managers, and account managers, increasing their knowledge of the business domain and the related technical tools to test the releases? How attuned is management to this gap?
- The client-facing roles mentioned earlier may ask why they should do testing that the testing department is accountable for. It is a valid question, but when buying a car, why do we want a test drive? Why do we do our own walk-through inspection of a home instead of leaving the work entirely to the home inspector? We do this because we are equally responsible for the outcome. Since these roles face the client, who can point to escaped defects or request feature enhancements, how can these responsibilities be downplayed?
- Let us face another argument: being too busy to do this acceptance testing. When automation is introduced, developers and testers must write additional lines of code and test scripts so that the automation works according to the 3A principle (Arrange what needs to be tested, Act by exercising the code under test, and Assert by evaluating the outcome against the expected result); a minimal sketch of such a test follows this list. This demands more time and learning additional tools, in which developers and testers must immerse themselves to meet the expectations of today's workforce. So, if one group that is already busy can raise its competencies, why should these client-facing roles not close their skill gap instead of pleading a busy life?
- Another important angle to consider is the class of requirements that add no direct customer value but do add business value, such as heartbeat monitors, exception-log checks, and checks on the time taken to test, all built as part of the automation effort (also sketched after this list). None of these requirements are part of the product features a customer sees, yet they are additional scope that the business mandates the execution wings to design, develop, and test. When these are baked into the level of effort or the timeline and the customer then asks to reduce time to market, client-facing roles cripple quality by not standing up for best practices.
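To make the 3A principle concrete, here is a minimal sketch of a test that follows Arrange-Act-Assert, written with Python's built-in unittest module. The ShoppingCart class and its methods are hypothetical, used only to illustrate the pattern, not a real product component.

```python
# A minimal sketch of a test following the 3A (Arrange-Act-Assert) pattern.
import unittest


class ShoppingCart:
    """Hypothetical cart used only to illustrate the pattern."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


class CartTotalTest(unittest.TestCase):
    def test_total_sums_item_prices(self):
        # Arrange: set up what needs to be tested.
        cart = ShoppingCart()
        cart.add("book", 12.50)
        cart.add("pen", 2.50)

        # Act: exercise the behavior under test.
        total = cart.total()

        # Assert: evaluate the outcome against the expected result.
        self.assertEqual(total, 15.00)


if __name__ == "__main__":
    unittest.main()
```

Even a test this small shows where the extra effort goes: the arrangement and the assertions are code that someone must write, review, and maintain alongside the feature itself.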
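Likewise, the business value-add checks from the last point typically become automated probes of their own. The sketch below, again using unittest, assumes a hypothetical health endpoint and log location; the URL, file path, and "Traceback" marker are illustrative assumptions, not a real product's configuration.

```python
# A sketch of non-customer-facing checks: a heartbeat probe and an
# exception-log scan wired into an automated suite.
import unittest
import urllib.request
from pathlib import Path

HEARTBEAT_URL = "http://localhost:8080/health"   # assumed endpoint
EXCEPTION_LOG = Path("/var/log/app/errors.log")  # assumed log location


class OperationalChecks(unittest.TestCase):
    def test_heartbeat_responds_ok(self):
        # The deployed service should answer its health endpoint with HTTP 200.
        with urllib.request.urlopen(HEARTBEAT_URL, timeout=5) as response:
            self.assertEqual(response.status, 200)

    def test_no_unhandled_exceptions_logged(self):
        # The exception log, if present, should contain no stack traces.
        if EXCEPTION_LOG.exists():
            contents = EXCEPTION_LOG.read_text()
            self.assertNotIn("Traceback", contents)


if __name__ == "__main__":
    unittest.main()
```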
If quality were a coin, automated testing and acceptance testing would be its two sides. Effort spent on only one side will not deliver the full desired economic impact. Automation is a shift in the way code is developed, tested, deployed, and monitored, and it requires refined skills. It is an important element in reducing the cost of quality, but so is acceptance testing, which requires additional skills of its own. If we fail to recognize and implement both effectively, the effort spent on automation may be offset by defects that escape for lack of acceptance testing. A new breed of client-facing roles is therefore on the rise, and management needs to focus on automated testing along with acceptance testing.