Wednesday, December 31, 2014

Acceptance Testing: Demystified from a business perspective

Recently, while building a RACI diagram, I found questions bubbling up about why acceptance testing must be done and why it is the testing group's responsibility. It is interesting to see how the roles and responsibilities associated with testing have morphed over time, so much so that I felt compelled to write this blog article, with the illustration below.


When developers write the code, the code should be error free and comply with the low-level design. Since most code today is conceived and written in a modular fashion, thanks to object-oriented analysis and design thinking, the unit testing done at this level focuses on individual modular units. Examples would be checking for memory leaks, the use of uninitialized variables, and so on. As several modules are consolidated within a package, the need to ensure that they integrate at a higher design level calls for integration testing. Examples would be passing data between modules in a meaningful way, or calling external databases or web services with adequate allowance for timeouts and exceptions so that the integrated code handles those situations gracefully. These types of testing constitute white-box testing, still on the engineering or development side, focusing on the internal structure and design of the program rather than its functionality.
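To make the unit-testing side of this concrete, here is a minimal sketch using Python's built-in unittest module. The discount_price function and its validation rules are hypothetical, invented purely to illustrate exercising one modular unit in isolation:

```
import unittest

# Hypothetical modular unit under test: a small pricing helper.
# In a real project this would live in its own module.
def discount_price(price, percent):
    """Return price reduced by percent; inputs must be validated."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("price must be >= 0 and percent in [0, 100]")
    return round(price * (1 - percent / 100.0), 2)

class DiscountPriceTest(unittest.TestCase):
    """White-box unit tests: one module, exercised in isolation."""

    def test_typical_discount(self):
        self.assertEqual(discount_price(100.0, 25), 75.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(discount_price(49.99, 0), 49.99)

    def test_invalid_input_is_rejected(self):
        # The unit should fail fast on bad data rather than let a
        # silent error propagate to the modules that call it.
        with self.assertRaises(ValueError):
            discount_price(-10.0, 25)

if __name__ == "__main__":
    unittest.main()
```

Integration tests at this stage look similar in form but exercise two or more modules together, asserting, for example, that a call to an external service times out and raises cleanly rather than hanging.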

Subsequently, the build (note that I am not using the word code) is transferred to the testers, who check the workings of the program against the specifications. These testers still have some intimate technical knowledge: enough understanding of the database to write SQL queries, check web-service methods, write test scripts, and so on. For custom projects, this group ensures conformance to requirements. This type of testing is grey-box testing, because the testers test against specifications but have additional access to tools for technical validation. They ensure that the quality of the software matches the requirements agreed upon in the user stories or in the business requirements documents (BRDs). Here is where the gap begins within the management community.
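As a rough illustration of this grey-box work, the sketch below validates a specification-level business rule ("every order must reference an existing customer") by querying the database directly rather than going through the application. The schema and table names are hypothetical, and an in-memory sqlite3 database stands in for whatever database the real project uses:

```
import sqlite3
import unittest

class OrderIntegritySpecTest(unittest.TestCase):
    """Grey-box test: checks a rule from the BRD by querying
    the database directly, not through the user interface."""

    def setUp(self):
        # In-memory stand-in for the application's database.
        self.db = sqlite3.connect(":memory:")
        self.db.executescript("""
            CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
            INSERT INTO customers VALUES (1, 'Acme');
            INSERT INTO orders VALUES (100, 1);
        """)

    def test_every_order_references_an_existing_customer(self):
        # Specification: no order may point at a missing customer.
        orphans = self.db.execute("""
            SELECT o.id FROM orders o
            LEFT JOIN customers c ON c.id = o.customer_id
            WHERE c.id IS NULL
        """).fetchall()
        self.assertEqual(orphans, [])

    def tearDown(self):
        self.db.close()

if __name__ == "__main__":
    unittest.main()
```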

Those who are familiar with Juran's definition of quality (Joseph Juran being the seminal thought leader who laid the foundation for quality management) will also relate to the need for "fitness for use" as another ingredient of quality assurance. Nobody other than the customer can actually evaluate this fitness, which is where user acceptance testing (UAT) evolved. But think about how the transfer of ownership is executed in an organization. Inevitably, there is communication from project or account managers (in custom software development) or product owners (in product development) to customers that the software is ready for their UAT. This is where these business owners should be held responsible for executing black-box testing, evaluating the software at a higher level and satisfying themselves that it works to the customer's requirements. This is the last window of opportunity to identify any escaped defects before the actual customer gets involved in interfacing with the software. Any escaped defects the customers and end users do find, or new feature enhancements they identify, are then gathered in product backlog grooming, risk-adjusted for priority, and accommodated in subsequent iterations.
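One lightweight way for a business owner to approach this systematically is to script the customer's key scenarios against the software's public interface only. The sketch below assumes a hypothetical ShoppingApp facade; everything in it is invented for illustration, and the point is simply that the test speaks the customer's language and never peeks inside the implementation:

```
import unittest

# Hypothetical application facade; in a real UAT pass this would be
# the deployed system, driven through its UI or public API.
class ShoppingApp:
    def __init__(self):
        self._cart = []

    def add_to_cart(self, item, price):
        self._cart.append((item, price))

    def checkout(self):
        return sum(price for _, price in self._cart)

class CustomerAcceptanceTest(unittest.TestCase):
    """Black-box acceptance test: a customer scenario expressed
    only in terms the user sees, with no knowledge of internals."""

    def test_customer_buys_two_items_and_sees_correct_total(self):
        app = ShoppingApp()
        app.add_to_cart("notebook", 3.50)
        app.add_to_cart("pen", 1.25)
        self.assertEqual(app.checkout(), 4.75)

if __name__ == "__main__":
    unittest.main()
```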


This level of acceptance testing is slowly and steadily disappearing from the responsibilities of project, program, account, and product owners, leaving the image of the performing organization at risk in front of the customer and eroding trust. Every car goes through a lot of quality checks, but when we buy one, we still take it for a test drive. Why shouldn't we test drive the software ourselves before we give it to our customers?