For the past couple of years, I have been rethinking the emergence of a modern cost of quality. Rooted in total quality management principles, the traditional cost of quality grouped prevention and appraisal costs under the cost of conformance, and internal and external failure costs under the cost of non-conformance (Boehm, 1981). I still believe these approaches are valid today, but they need to recognize reflective leadership (Rajagopalan, 2018) to accommodate additional layers: technology convergence (the 5th Industrial Revolution is already here) (Anil, 2025), strategic decision-making (value delivery is much more than product features) (Rothaermel, 2019), and ethical decision-making (Trevino & Nelson, 2017; Canca, 2020; Kemell & Vakkuri, 2024) in the products and tools used in today's workforce (AI considerations such as beneficence, non-maleficence, justice, and autonomy), as well as their impact on the customers these products and tools serve (Rajagopalan, 2015).
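The traditional grouping can be sketched in a few lines of Python. The cost figures below are purely hypothetical placeholders for illustration; only the category names come from the traditional taxonomy (Boehm, 1981):

```python
# Traditional cost-of-quality taxonomy. The dollar amounts are
# hypothetical, chosen only to illustrate the grouping.
costs = {
    "prevention": 40_000,        # training, documentation, equipment, time to do it right
    "appraisal": 25_000,         # testing, inspections, audits
    "internal_failure": 30_000,  # rework and scrap before release
    "external_failure": 55_000,  # warranty, liability, lost business after release
}

cost_of_conformance = costs["prevention"] + costs["appraisal"]
cost_of_nonconformance = costs["internal_failure"] + costs["external_failure"]
total_cost_of_quality = cost_of_conformance + cost_of_nonconformance

print(cost_of_conformance)     # 65000
print(cost_of_nonconformance)  # 85000
```

The classic argument, of course, is that spending deliberately on the conformance side reduces the typically larger non-conformance side.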
One of the premises behind my development of the modern cost of quality is that quality is no longer merely an outcome of what is delivered. It is part of a continuously governed decision space! However, in practice, people have used application lifecycle management (ALM) tools in their plan-driven and change-driven approaches as repositories of artifacts to provide traceability and auditability in their compliance ecosystems! My synthesis of these observations is as follows:
- Decisions are more important than artifacts
- Quality is multidimensional and begins with needs assessment
- Artificial Intelligence enabled solutions both help and harm value creation
- Humans remain morally and economically accountable, unlike the machine workforce
- Learning velocity matters more than delivery velocity
The mental model of ALM 1.0 that existed until the late 1990s lasted for about 10 years, and its mantra was to reduce life cycle variance. Software was considered a controllable engineering artifact, leading to the adoption of the staged software development life cycle (SDLC) approach (Royce, 1970) that relied on documentation, with responsibilities delineated across business analysts, systems analysts, engineers, quality professionals, and operations. The major challenges were delayed decision-making and traceability across artifacts sometimes maintained in multiple disparate tools; these were attributed more to incorrect practices (Rajagopalan, 2014) than to the tools themselves.
The formation of the Agile Manifesto in early 2001 (Cunningham, 2001) shaped the mindset for another 10+ years of adopting adaptive approaches to software development. Subsequently, the challenge of treating a single source of truth as non-negotiable led to the ALM 2.0 era, where the mantra evolved to optimizing flow across cross-functional teams and a customer proxy for faster feedback. This made software a living product that evolved through feedback (Meadows, 2008; Checkland, 1999). Speed was more important than accuracy, as continuous delivery in iterative cycles was expected to deliver the quality outcome! The SDLC stages gave way to stages like inception, elaboration, construction, and transition, with responsibilities overlapping across roles. The discovery of needs, such as backlog refinement, was no longer one person's job, and solution engineering (design, development, delivery, and deployment) became a cross-functional team commitment traceable through integrated artifacts (Rajagopalan, 2019).
ALM 2.0 was a significant improvement focused on flow and reducing waste! Understanding customer needs earlier, with faster feedback loops, laid the foundation for better solutions. But several anti-patterns continued (Rajagopalan, 2020): tools were abused and misused by siloed teams, role overloading created ambiguity, and customer proximity was lacking. These challenges manifested as newer problems like tool sprawl, ritualized ceremonies, velocity weaponization, and untraceable decisions.
Despite agility's promises, practice reinvented the same problems, just as waterfall originally existed only in practice and never in theory (Rajagopalan, 2014). Some even thought fancy reports and KPI dashboards would point to the problem, failing to recognize that the tool was not the villain and technology was not the problem. The issue was the premature adoption of technology within the business environment without change management, talent re/upskilling with training, required documentation, provisioning of access to the tools, and, finally, the time to execute (prevention costs)! People failed the processes that create better products!
Through my consulting on ALM training, project and program training, and coaching and mentoring, I felt that people's resistance to learning and the business's need to accelerate kept teams from reflecting on powerful questions: Why was one feature prioritized over another? What risks were consciously treated? How well were we monitoring existing and emerging risks? What alternatives were rejected? What assumptions could invalidate our continual decisions in value delivery?
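These questions lend themselves to a lightweight decision record that tools could preserve alongside artifacts. Here is a minimal sketch in Python; the field names are my own illustrative assumptions, not taken from any particular ALM product:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionRecord:
    """A hypothetical decision-memory entry answering the questions above."""
    decision: str
    rationale: str                    # why this option over another
    alternatives_rejected: list[str]  # what options were consciously set aside
    risks_treated: list[str]          # which risks were consciously addressed
    assumptions: list[str]            # what could invalidate this decision later
    decided_on: date = field(default_factory=date.today)

record = DecisionRecord(
    decision="Prioritize feature A over feature B",
    rationale="Feature A unblocks the top customer escalations",
    alternatives_rejected=["Ship feature B first", "Ship both with reduced scope"],
    risks_treated=["Feature B customers churn before the next release"],
    assumptions=["Escalation volume reflects broad customer demand"],
)
print(record.decision)
```

The point is not the data structure itself but that the decision, its rejected alternatives, and its invalidating assumptions become first-class, queryable memory rather than something buried in meeting notes.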
Before this fire could be addressed, the combination of market forces, 4IR technologies with cyber-physical product interfaces, tool explosion with automated testing, and the emergence of AI with its generative capabilities created a perfect storm, adding to the existing fire! Discussions at many conferences were no longer purely technical but about how tools supported decision quality, ethical quality, societal impact, trust quotient, learning debt, automation bias, and model risk! This was around 2021, as we were somewhat coming out of the fire of the pandemic dragon! That was when I felt a new ALM 3.0 era was emerging.
ALM 3.0 had a new mantra, significantly different and also refreshing: manage the cost associated with the risks of bad decisions amplified by AI. I view ALM 3.0 as a modern way of optimizing the cost of quality for the human-AI ecosystem in the 4IR and 5IR application lifecycle management space (Domin et al., 2024). It is not just a tool to track and trace work artifacts. To me, ALM 3.0 reinvigorates the trust-preservation framework (Canca, 2020; Kemell & Vakkuri, 2024) that ALM tools can promote to track decision memory and address the known, unknown, and unknowable risks across all human resources (individuals, teams, stakeholders, stakeholder groups) and non-human resources (facilities, equipment, materials, infrastructure, and supplies). Here is my proposed ALM 3.0 architecture across the value delivery framework.
(c) Dr. Sriram Rajagopalan, ALM 3.0 Decision-Making Value Delivery Framework
As I presented these ideas at the Global Conference on Leadership and Project Management, revalidating my thoughts, I began asking myself: how can this ALM 3.0 scale and sustain itself? I felt convinced that quality was a non-negotiable necessity across industries globally, underpinning the inexorable need to infuse the current cost of quality principles with modern elements around quality engineering, quality audits, internal quality monitoring, and external quality impact. And thus emerged my modern cost of quality foundation!
(c) Dr. Sriram Rajagopalan, Modern Cost of Quality Framework
- Proactive prevention costs no longer rest on training, documentation, equipment, and the time to do things right alone! They also integrate AI's strengths in requirements management, test case authoring, task writing, code writing, risk identification, and risk response planning, for instance. At the same time, treating machine resources (robots, machines, agents) as expendable could compromise efficiency! Yes, it is easy to start a new machine in the cloud, but can we as easily build a new robot that cleans airport floors? The goal is to continuously learn and guide ourselves on the processes followed and the decisions made. I call this component "Quality Engineering".
- Appraisal costs are not limited to monotonous testing and inspections! They expand to AI-driven anomaly detection, automated and RPA testing, exploratory unscripted testing, and engaging digital twins for performance monitoring. I call this component "Quality Audit".
- Internal failure costs are not rework and scrap alone! We are now responsible for the costs of decisions delayed or deferred, which lead to quality slips in the customer's hands. This means ALM 3.0 requires teams to identify, assess, and treat non-functional requirements as part of application delivery and to monitor triggers that may be hiding problems.
- Capturing these alerts (e.g., health check monitors) and performing AI-assisted root cause analysis, along with identifying and serving risk response plans, are critical.
- In a manufacturing context, this means proactively reviewing and ordering materials to limit WIP, forecasting capacity, and suggesting design recommendations (e.g., increased load on machines requiring additional machines to be spun up).
- With robots, autonomous cars, and newly created applications, we may have to evaluate how far they can calibrate themselves to reconfigure or self-heal.
- All of these thoughts may require thinking of alerts (features people don't see but that are required for sustaining these products) and escalating them intelligently up and down the enterprise risk registers. This requires not only technology support but also retraining people on the business processes used in the adopted tools. I call this component "Internal Quality Monitoring".
- External failure costs are no longer just the challenges of a company dealing with lost business, liability, and warranty! In fact, they now draw more support from automation and AI in the forms of:
- Collecting telemetry data to learn what people wanted and where effort spent didn't generate value (customer retention, hidden features customers can't find, etc.)
- Filtering market signals to inform decision-making for products, projects, programs, and portfolios.
- Continuously evaluating our AI models to provide options for customers who cannot use the systems as designed (the ethical component of justice) and to validate themselves (explainability, responsible AI compliance, etc.). I call this component "External Quality Impact".
- Some of these thoughts, I realize, are forward-looking! We are not there yet! But I am sure regulations are catching up, the market is moving faster, and customers are demanding more! The question is: if AI is helping one work faster, why can't value be delivered to us faster too?
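The four components above can be laid out as a simple mapping from the traditional cost categories to my modern components. The component names are mine from the framework above; the example items under each are illustrative, not exhaustive:

```python
# Modern cost of quality: traditional category -> (modern component, example items).
# The example items are illustrative placeholders drawn from the discussion above.
modern_cost_of_quality = {
    "prevention": ("Quality Engineering", [
        "training, documentation, and time to do things right",
        "AI-assisted requirements management and test case authoring",
        "risk identification and risk response planning",
    ]),
    "appraisal": ("Quality Audit", [
        "AI-driven anomaly detection",
        "automated and RPA testing",
        "digital twins for performance monitoring",
    ]),
    "internal_failure": ("Internal Quality Monitoring", [
        "health-check alerts with AI-assisted root cause analysis",
        "non-functional requirement triggers",
        "intelligent escalation into enterprise risk registers",
    ]),
    "external_failure": ("External Quality Impact", [
        "telemetry on where effort did not generate value",
        "market-signal filtering for portfolio decisions",
        "model explainability and responsible-AI compliance",
    ]),
}

for category, (component, items) in modern_cost_of_quality.items():
    print(f"{category} -> {component}: {len(items)} example items")
```

Framing it this way keeps the familiar conformance/non-conformance arithmetic intact while making explicit where AI adds both leverage and new risk in each bucket.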
Boehm, B. W. (1981). Software engineering economics. Prentice Hall.
Bolman, L. G., & Deal, T. E. (2017). Reframing organizations: Artistry, choice, and leadership (6th ed.). Wiley.
Trevino, L. K., & Nelson, K. A. (2017). Managing business ethics: Straight talk about how to do it right (7th ed.). John Wiley & Sons.