
Wednesday, December 31, 2025

Tools Don’t Create Great Scrum Masters—Leadership Does

Recently, I was engaged in a LinkedIn conversation. The post listed a few commercial tools as the critical assets of a great Scrum Master. When I mentioned that Scrum is mainly about promoting leadership, the comeback was that leadership can coexist in the tools. Since social media is a place for free speech, I didn't extend the conversation into an unhealthy debate. Instead, I decided to write my thoughts briefly in my monthly blog.

Having helped many companies with their agile transformation and stood up many successful teams, I firmly believe that no commercial tool makes anyone a great Scrum Master, Project Manager, or leader. Nor has any tool ever promoted the self-organization that rests on the critical Scrum values - Courage, Focus, Openness, Respect, and Commitment. Please don't get me wrong - a good tool can certainly help the team with risk-driven prioritization and give visibility into work in progress! A good tool can surface impediments earlier, promote a single source of truth, and reduce the total cost of ownership!

However, over the years, I've seen organizations invest heavily in many siloed tools and use reporting dashboards that weaponize the metrics. Some examples include:

  • Using one team's velocity as a benchmark for another team's performance
  • Using the count of individual stories completed within the team, creating a hierarchy instead of self-organization
  • Creating more work by integrating documents in one tool (wiki) with stories in another tool (board), etc.

What is going on here? People are missing the fundamental concepts of a framework. No framework (PMBOK, Agile, Scrum, Lean, Kanban, etc.) ever mandated a particular commercial tool! People's affinity for and comfort with certain tools make them believe agility will emerge once the tool is "properly configured." What actually emerges is something else: beautifully documented dysfunction. I challenge this exact fallacy, that is, the belief that structure can replace thinking, that process can substitute for leadership, and that a tool can provide both the thinking and the leadership (Rajagopalan, 2025a). Thinking and leadership should continually evolve with the external and internal business environment!

In one engagement, a team proudly showcased an advanced Jira setup with custom workflows, automated rules, and detailed dashboards. Yet delivery was slowing, morale was low, and finger-pointing was high. When asked what problem the tool was meant to solve, the room went quiet. Using the tool had become the goal (Rajagopalan, 2025b), not using the tool to create a product, service, or result that the customer used! The leadership conversation had disappeared.

Scrum is about helping people dream, learn, and deliver more for customers

A great Scrum Master’s job is not to manage boards but to expand what the team believes is possible. Many Scrum certification courses and microlearning modules emphasize servant leadership without truly understanding the elements of servant leadership (Rajagopalan, 2024)! Having invested substantial time researching these areas of leadership, I firmly believe this is where transformational leadership becomes essential. The Scrum Master must help teams dream bigger, learn faster, and deliver more meaningfully. The Scrum Master does not do this by pushing people harder, but by leading differently.

I once coached a Scrum Master (Rajagopalan, 2025c) who believed the Scrum Master role was to “keep the team moving faster.” Velocity charts dominated every discussion. When we stepped back, it became clear that the team had stopped experimenting, stopped challenging assumptions, and stopped learning. Speed had replaced curiosity. This is exactly why I emphasize that thinking leadership over mechanical execution is pivotal: without learning, acceleration is just burnout in disguise.

The 4 I’s of Transformational Leadership show up daily in effective Scrum Mastery (Rajagopalan, 2014a). 

  • Individualized consideration is visible when a Scrum Master recognizes that a senior engineer disengaging in retrospectives is not a performance problem but a signal of unaddressed frustration. 
  • Inspirational motivation appears when sprint goals are framed around customer impact instead of ticket completion.
  • Intellectual stimulation surfaces when team members are coached to ask uncomfortable questions (Why do we accept this risk? Why is delivering this feature with bugs more important than delivering something usable to the customer? What new customer personas have we unearthed through these new functionalities?).
  • Idealized influence is the power skill the Scrum Master demonstrates by walking the talk.
These are conversations that do not happen inside a tool, yet they are what direct the team to deliver valuable outcomes.

Tool Fetishism: When Comfort Zones Kill Agility

But what happens when people gravitate to tools galore without understanding the core empirical pillars of transparency, inspection, and adaptation? This leads to tool fetishism, which thrives when familiarity (learning has stopped) and comfort substitute for judgment and ways of working. I’ve seen teams defend suboptimal workflows simply because “that’s how <the tool> works.” In reality, that’s how they configured it, often years ago, under different constraints.

The tool itself is not the culprit. The tool itself evolves over time! But internal security policies, a lack of interest in upgrading the tool to benefit from its newer features, or a reluctance to evaluate other tools make people conclude that the tool is the failure!

In one case, a team resisted adopting a newer collaboration tool that better supported discovery work, insisting Jira was sufficient. Discovery conversations were being forced into ticket comments, and learning slowed dramatically. The Scrum Master initially complied—until she realized comfort was masquerading as maturity. Challenging the status quo felt risky, but leadership demanded it. This mirrors a core theme in Leadership Unleashed: comfort zones are organizational blind spots.

Velocity misuse is one of the most damaging agile anti-patterns I encounter. Originally designed as a planning heuristic, velocity often becomes a proxy for productivity, discipline, or even individual performance. When that happens, teams respond rationally—to the wrong incentive. I once worked with a team (Rajagopalan, 2014b) whose velocity steadily increased while customer defects also rose. Stories were being split unnaturally, estimates inflated, and technical debt deferred—all to “meet expectations.” Jira showed success. Reality did not. This is a textbook example of metric-induced blindness (Rajagopalan, 2025a): numbers replace judgment, because data can be abused to tell the wrong story.

Pseudo-Agility: When Doing Agile Replaces Being Agile

Tool fetishism leads to pseudo-agility, which is not a failure of intent but a failure of leadership. Standups occur, PI planning is conducted, and retrospectives are documented. Ceremonies become ritualized with nothing fundamentally changing because mediocrity has become the new norm. Teams comply without committing, participate without owning, and deliver without learning.

In a global program spanning time zones, daily standups had become status broadcasts. No impediments surfaced, not because they didn’t exist, but because it wasn’t safe—or useful—to raise them. Once leadership shifted the focus from reporting to sense-making, the same ceremonies began producing real insights. The rituals didn’t change. The leadership did.

I believe tools are information radiators serving up data for key metrics. As the old saying "Garbage In, Garbage Out" goes, it is the working agreements, ways of working, and business processes that drive the critical conversations! A transformational Scrum Master treats tools as conversation starters, not judgment tools. Metrics are invitations to inquiry, not verdicts. Dashboards raise questions rather than close discussions.

Remember that although the manual screwdriver works well, if we fail to see what other types of tools exist, like a power drill, we will not be supporting the team to dream, learn, and deliver. I myself have challenged leaders, using appreciative inquiry inside and outside the organization, to give up their tools (Jira, OnTime, Mercury Quality Center, and numerous processes like manual tracking) and have led the organization to be more productive on a better tool, SpiraTeam (Rajagopalan, 2012).

Leadership First. Tools Next. Agility cannot be installed, configured, or automated into existence. Tools can support agility, but they cannot rescue it. When leadership leads and tools follow, teams learn, adapt, and improve sustainably. What do you think? I am interested in your thoughts!

References

Rajagopalan, S. (2012, April). Leading Change: Remote Teams can collocate on the right ALM tools. Retrieved from https://agilesriram.blogspot.com/2012/04/leading-change-remote-teams-can.html

Rajagopalan, S. (2014a, February). Ingredients of a Transformational Leader-What can you do? Retrieved from https://agilesriram.blogspot.com/2014/02/ingredients-of-transformational-leader.html

Rajagopalan, S. (2014b, November). Cost of Quality: The increasing value of acceptance testing besides automated testing. Retrieved from https://agilesriram.blogspot.com/2014/11/cost-of-quality-increasing-value-of.html

Rajagopalan, S. (2024, January). Servant Leadership: Demystify the Agile Scrum Scaled Agile misconceptions. Retrieved from https://agilesriram.blogspot.com/2024/01/servant-leadership-demystify-agile.html

Rajagopalan, S. (2025a). Leadership Unleashed: Game Changing Insights. Denver, CO: Outskirts Press.

Rajagopalan, S. (2025b, August). Resource Management: Stop Managing People-Start Managing Flow, Capacity, and Intent. Retrieved from https://agilesriram.blogspot.com/2025/08/resource-management-stop-managing.html

Rajagopalan, S. (2025c, July). When “Team Harmony” Becomes the Problem: A Scrum Master Coaching Moment. Retrieved from https://agilesriram.blogspot.com/2025/07/when-team-harmony-becomes-problem-scrum.html

Rajagopalan, S. (2025d, October). Consensus Isn’t Alignment: Lessons from NGT, Delphi, and Wideband Delphi. Retrieved from https://agilesriram.blogspot.com/2025/10/consensus-isnt-alignment-lessons-from.html

Monday, November 24, 2025

Navigating and leading team and stakeholder journeys

Navigating Team and Stakeholder Journeys: A Dynamic Approach

Team development and stakeholder engagement both follow structured stages—Forming, Storming, Norming, Performing, and Adjourning for teams, and Unaware, Resistant, Neutral, Supporting, and Leading for stakeholders. While these stages appear linear in theory, real-world challenges can cause regression. Poor collaboration, the introduction of change, lack of risk awareness, shifting roles, or inconsistent practices can push teams and stakeholders backward, making proactive engagement essential for sustainable progress.

1. Forming & Unaware: Laying the Foundation
Just as teams in the forming stage may be anxious and unaware of what the initiative is for, some stakeholders may be unaware of how these initiatives impact them or what their role would be. Clear communication is crucial—leaders must articulate vision, objectives, and expectations early. For teams, setting norms and fostering psychological safety can ease transitions. For stakeholders, awareness campaigns, informative sessions, and targeted messaging help move them from unawareness to engagement. Essential documents here include the business case, benefits management plan, and project charter.

2. Storming & Resistant: Managing Conflicts and Concerns
As teams enter the storming phase, conflicts emerge over roles, responsibilities, and approaches. Similarly, stakeholders may resist changes that disrupt their routines or priorities. Leaders must actively listen, mediate conflicts, and address concerns with transparency. Encouraging open dialogue, clarifying goals, and demonstrating quick wins can mitigate resistance and build trust. Essential documents include project charter, risk register, stakeholder register, stakeholder engagement plan, and team charter with a clear project vision/scope statement.

3. Norming & Neutral: Aligning and Strengthening Commitment
Once conflicts subside, teams begin norming, establishing efficient workflows and mutual respect. Stakeholders, too, may shift to a neutral stance, neither opposing nor actively supporting efforts. Reinforcing alignment through continuous feedback, recognition, and collaborative decision-making strengthens commitment. Encouraging stakeholder input in planning processes fosters a sense of ownership and investment in outcomes. An essential aspect here is updating all the management documents from the storming phase, primarily the team charter that the team creates itself.

4. Performing & Supporting: Driving Synergy and Value
High-performing teams operate with synergy, leveraging strengths for optimal results. Stakeholders at this stage move from passive acceptance to active support. Leaders should empower teams with autonomy, provide resources for innovation, and celebrate successes. For stakeholders, demonstrating tangible benefits, inviting deeper collaboration, and showcasing impact stories can strengthen their advocacy.

5. Adjourning & Leading: Sustaining Engagement Beyond Completion
Teams eventually adjourn as projects conclude or objectives shift. Engaged stakeholders, however, can continue leading initiatives forward. Capturing lessons learned, maintaining relationships, and transitioning responsibilities smoothly ensure sustained engagement. Recognizing contributions, fostering ongoing dialogues, and preparing for future collaborations reinforce long-term partnerships.

By understanding the dynamic nature of both team and stakeholder journeys, leaders can anticipate regressions and proactively address challenges. Intentional engagement strategies help navigate transitions, ensuring both teams and stakeholders remain aligned, resilient, and forward-moving in the face of change.

Tuesday, October 14, 2025

Consensus Isn’t Alignment: Lessons from NGT, Delphi, and Wideband Delphi

When coaching a person for their PMP exam, a discussion emerged in which they asserted that consensus meant alignment of expectations! Early in my career, I came to understand the expression "the elephant in the room," where everyone in a collective setting fails to address a controversial issue despite knowing of its obvious existence. The comfort of not addressing it, the sensitivity of bringing it to the surface, or even the fear of being labelled a naysayer for acknowledging it publicly were a few of the risks I had observed impeding value delivery! To me, silent agreement or unanimous consensus without any discussion is frequently an indication that important knowledge is not visible in the room and subsequently never makes it into the decision-making.

That realization pushed me toward structured consensus techniques, not because I love process, but because I had seen what unstructured collaboration does under pressure. So I explained three techniques in particular that have stayed with me over the years, in light of a product that I once managed: the Nominal Group Technique (NGT), Delphi, and Wideband Delphi. Each earned its place through practical scars, not just theory.

Sometime around 2012, I was developing an innovative first-responder mobile application for a pharmaceutical client. The application integrated with portable ECG collectors connected to an iPad and used in an ambulance. The goal was to capture ECG data from unconscious or incoherent patients, detect critical cardiac conditions in real time, and transmit actionable insights to the hospital before arrival, enabling immediate treatment, including invasive interventions if needed.

Developing this application was not just building a product. It was a medical device ecosystem operating under clinical, technical, regulatory, and ethical constraints. I found that everyone was competent and knowledgeable, but not everyone spoke the same language. Medical experts dominated the dialog about criticality while engineers optimized the solution prematurely. Neither factored in the regulatory risks until those members were specifically engaged. So it was apparent to me that the 'unknown unknowns' stayed unspoken, sometimes due to deference to the sensitivity or criticality of impact.

Here is where I used the Nominal Group Technique (NGT). The goal was to get more breadth of features and risks to delivery without getting into solution mode! I facilitated the interaction face-to-face in a combined setting using silent idea generation (dot voting, brainstorming), undebated round-robin sharing (brainwriting, Yes-And scenario writing), and facilitated inter-group clarification (forming multiple teams of clinical, engineering, and regulatory members working together) to prioritize among diverse requirements. The advantage of this technique is that it encouraged diverse participation and promoted consensus. It was not quick, but it picked up on many constraints, assumptions, risks, and dependencies. I particularly saw this NGT facilitation lead to increased collaboration. For example:

  • Clinicians highlighted the most critical ECG patterns to focus on, such as ventricular fibrillation, ST-elevation myocardial infarction, and a few others
  • Engineers emphasized battery drain and securing against signal interference risks
  • Designers raised questions about ambulatory use, designing for stability and UX when first responders operate in a stressful environment with gloves and moving vehicles
  • Compliance specialists named medical, legal, and regulatory considerations for approval

Obviously, NGT didn't give us all the answers, but it shaped the product needs better, leading to the prioritization of a minimum viable product. We diverged first across the problem space before converging on the solution space. But as the solution began taking shape with more people involved, I found some began identifying requirements only after the meetings were over. Dominant personalities in the room and the absence of anonymity made it challenging for people to speak up in these facilitated settings. This is where I found the Delphi technique coming to the rescue.

The Delphi technique asked people to raise their concerns in anonymous surveys and questionnaires. These instruments required careful design to avoid double-barreled and leading questions while focusing on identifying risks across the lifecycle, regulatory and ethical edge cases, and, most importantly, the assumptions people were willing to challenge anonymously but not publicly. I would say the anonymity brought out the known unknowns and the unknown knowns. Designing the surveys and questionnaires took time, sometimes working with experts to ensure the qualitative and quantitative data were collected correctly and working iteratively to understand some answers.

The Delphi technique didn't remove disagreements when new risks and challenges, unidentified in public settings, were brought up. But it removed the fear of identifying them and the bias associated with groupthink. As the development of the solution progressed, the focus shifted to ensuring our solution maximized the likelihood of regulatory approval and the learning from the piloting first responders. This is where I found the Wideband Delphi helpful.

The Wideband Delphi is a hybrid technique that combines the best of both NGT and Delphi. The focus was no longer on diverging to understand the problem space and converging on solutions! Nor was it on power imbalances or biased interpretations creating further risks, because the team members were now in a 'performing' stage! But as first responders identified needs such as a simpler UX (fewer, bigger buttons to click rather than nested menus) and an iterative regulatory focus emerged (agreements on the details behind the ECG patterns), a mini-Delphi pass over the product backlog (missed documentation, design considerations) with the experts, followed by a discussion to prioritize and estimate the items, worked well. Planning poker and PERT are both Wideband Delphi techniques that facilitate this in a lightweight setting.
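To make that lightweight estimation concrete, here is a minimal sketch of aggregating one Wideband Delphi round using the classic PERT expected value, E = (O + 4M + P) / 6. The function names and sample numbers are my own illustrations, not figures from the engagement described:

```python
def pert_estimate(optimistic: float, most_likely: float, pessimistic: float) -> float:
    """Classic PERT (beta) expected value: (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

def delphi_round(estimates: list[tuple[float, float, float]]) -> tuple[float, float]:
    """Combine each expert's (O, M, P) triple into a PERT mean, then average across
    experts. Returns (team_estimate, spread); a wide spread signals another round."""
    means = [pert_estimate(o, m, p) for o, m, p in estimates]
    team = sum(means) / len(means)
    spread = max(means) - min(means)
    return team, spread

# Three experts estimate the same backlog item in days (illustrative numbers).
round1 = [(2, 4, 12), (3, 5, 8), (1, 2, 3)]
estimate, spread = delphi_round(round1)
print(f"Round 1 estimate: {estimate:.1f} days, spread: {spread:.1f}")
# prints "Round 1 estimate: 4.1 days, spread: 3.2"
```

The large spread between the expert means is the cue to discuss assumptions openly and re-estimate, which is the heart of Wideband Delphi rather than simply averaging once.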

It was wonderful to retrace my earlier product development experience and reconnect with these techniques for unearthing risks. All of these techniques have been time-tested and practiced in various settings, whether or not we know them by name! But ignoring their benefits and assuming silent consensus is team alignment is refusing to acknowledge the elephant in the room.

The person I was coaching felt very grounded and satisfied. What do you think? What other techniques have you used or benefitted from?

Tuesday, September 2, 2025

When Risk Materializes at 35,000 Feet: A Lesson in Contingency, Leadership, and Empathy

Late last week, I was traveling from Mumbai to Ahmedabad, expecting a routine landing around 8:30 pm. Instead, nature had other plans. Severe lightning, heavy thunderstorms, and unsafe landing conditions meant our aircraft attempted to land three times and then made the only responsible decision left: divert to one of the nearest viable airports, Surat. This was not a hypothetical risk. This was risk materializing in real time.

To add to the emotional backdrop, Ahmedabad had recently witnessed a tragic Air India crash that killed everyone on board except one survivor. That reality was not lost on many of us sitting in that aircraft. As the pilot announced the diversion, I was talking with people near my seat; some were calm, but others were afraid of what would happen to the flight!

Once we landed in Surat, we were asked to remain seated on the tarmac while authorities coordinated next steps. The pilot stepped out of the cockpit and addressed us directly—calm, composed, and transparent. He explained the risks avoided by not forcing a landing, the air traffic and safety regulations that prohibit deplaning without clearance, and the immediate next steps to identify safe and lawful alternatives before moving passengers. He further announced that refreshments were being ordered from Surat to serve in the cabin as soon as possible, and that he was willing to talk with the authorities to evaluate options for anyone with further connecting flights.

This is a textbook example of risk response planning. As the primary risk of not landing in the destination city became an issue, secondary risks were identified and analyzed within the constraints of safety for all. And yet, this is where things started to unravel: where risk management meets human behavior.

As announcements continued, emotions in the cabin escalated.

  • People began raising their voices (raising is putting it mildly).
  • Many formed informal lines in the aisle inside the aircraft, inconveniencing others.
  • Others demanded to deplane because they “had friends in Surat.”
  • A few insisted this was the airline’s “fault” and demanded vouchers.
  • One passenger demanded that their checked-in luggage be retrieved immediately—on the tarmac because their Uber would be coming soon (what, to the tarmac?)

The emotional intelligence of this pilot operating under such pressure was commendable! He patiently explained that he was still working with the air traffic control (ATC) authorities and that taking off for Ahmedabad was very likely once the storm cleared. He emphasized that the situation was still fluid, that the flight was officially not yet cancelled, and that no one could be deplaned unless there was a medical emergency or he got ATC confirmation.

As if he had jinxed the situation, he got on a call and quickly announced that another diverted aircraft had a medical emergency, requiring a passenger to be deplaned for urgent treatment. He said the refreshments were on the way but the medical emergency was being prioritized. This created another uproar among the passengers: some complained, asking why someone could deplane from another plane while they could not deplane from this one. Others answered them in their own words, asking which part of "medical emergency" they did not understand!

Amidst this chaos, the most surreal moment of all happened! I saw a few teenagers and adults pushing forward to take selfies with the pilot and air hostess. Really! It was disheartening to see safety and empathy deprioritized for social media hype! In that moment, I wasn’t just witnessing travel frustration. I was witnessing how people break systems under stress.

Here’s the uncomfortable truth: Most people are fine with processes until those processes inconvenience them. We celebrate safety protocols when they work quietly in the background as long as we are not impacted. We challenge them the moment they interfere with personal comfort, entitlement, or impatience.

What many failed to understand:

  • Contingency plans are not personalized
  • Exceptions are not scalable
  • Safety decisions cannot be crowd-sourced mid-crisis
  • Regulatory constraints are not negotiable based on emotion

In project management, product delivery, aviation, healthcare, crisis response or virtually any discipline or industry, the rule is the same:

"Once risk materializes, options narrow. Discipline increases. Flexibility decreases."

In that chaos, I also observed the pilot's leadership. I wish I had gotten his name, but I didn't want to make things worse! He didn’t retreat behind the cockpit door, didn't lecture, and didn't react emotionally to provocation. Instead, he patiently, politely, and assertively communicated what was known, clarified what was not yet possible, explained why safety and compliance came first, and remained empathetic without becoming authoritarian. In my experience, great leaders don’t “cash in” on adversity. They rush to the front of the line and hold the line, even when it’s unpopular.

Every one of us is also a leader! So we need to practice leadership when adversity strikes. Don't ever waste a risk! When an issue presents itself, practice the leadership hygiene of accommodating the contingencies that arise. Having only a plan A is unrealistic optimism. Having a plan B is realistic professionalism. Knowing when to activate it and respond to the issues before us is leadership. Contingency and fallback plans exist precisely for moments when conditions change rapidly, emotions escalate, conflicts arise between individual preferences and group safety, and decisions are made with just-enough incomplete information!

And yet, in corporate settings, I often hear:

  • “Let’s cross that bridge when we come to it”
  • “We’ll figure it out if it happens”
  • “That’s an edge case”

Until it isn’t. This practice makes us complacent leaving us to react unempathetically!

The real lesson from that flight wasn’t about aviation but leadership. It is about how humans process and respond when control is taken away! If you respond with entitlement, blame, or opportunism, then there is a continuous improvement opportunity. Learn to respond with patience, practice situational awareness, and trust, appreciate, and respect other people's expertise, the process guidelines, and the systems in place for the larger society.

True leadership shows up not when things go as planned, but when they don’t.

That night, safety won. Process won. Calm leadership won. And while not everyone appreciated it in the moment, everyone benefited from it.

How do you relate to my interpretation of this experience? What other experience, similar to or different from mine, have you experienced? I am looking forward to hearing from you!

Tuesday, August 26, 2025

Resource Management: Stop Managing People-Start Managing Flow, Capacity, and Intent

I was fortunate to travel to Melbourne, Sydney, Manila, and India as part of my work discussing good practices using a specific SaaS product in product, project, and program management processes. The industries ranged from banking, investment, FinTech, defense, transportation, and non-profit sectors to healthcare. One theme that kept resurfacing was managing resources. Those using adaptive approaches wanted to use the metrics to push velocity. Others following plan-driven approaches thought people's unutilized capacity meant productivity loss. At the program level, people reasoned that all resources are interchangeable and can be moved from one team to another across the projects in the program!

I feel that rushing to use metrics to measure resource utilization may also lead to failures. Resource management is not about people shortage or the utilization of people's time. In fact, the focus of resource management should be managing flow, capacity, and intent. Let me elaborate with some examples.

One team using an adaptive approach reasoned that their velocity was unstable and stories often spilled over to subsequent sprints. Their conclusion was that team members had maxed out their available capacity! So either new people had to be added, or AI and automation had to be factored in. I asked how ready the backlog was with respect to risks to delivery and a clear definition of ready for pointing, and how much wiggle room existed for innovation and experimentation! The discussions revealed that the features in the backlog were big, without a clear articulation of value to the customer, so the story points were not based on risks to delivery or a clear definition of done! Hypothetically, even if the team delivered 20% more, they were taking on technical debt, just enough to make the velocity charts tell the narrative management wanted to see! How is improved velocity valuable to the customer or business if the backlog was not ready? One important hidden lesson is that poor backlog readiness masquerades as a resource problem!

Another plan-driven team used hours! Here, planning looked at the number of people in the release and the available hours per person between the start and end dates, and made sure everyone had 100% of their available capacity filled with tasks! In theory, great! But the same team reported that the work delivered had more defects and incorrect documentation for the users. Follow-up discussions pointed to inadequate time spent understanding and collaborating on the work (assumptions were made, but no assumption analysis was done), and the metrics on defects logged/closed meant the increased defect density (defects/requirement) was demotivating! So, does full utilization mean quality delivery?

I always emphasize that we should measure what matters! This applies to resource management too, as we shift the focus to flow, capacity, and intent. For instance, we should establish good leading indicators during release/phase or iteration/sprint planning to see how ready we are to maximize flow, utilize capacity, and improve intent. Here are some good resource management metrics to support this approach.

Backlog Readiness Index
  • Description: % of upcoming backlog items meeting the Definition of Ready (DoR)
  • If this is low, it means: story spillover, sprint churn, and scrap/rework
  • Steps:
    1. Define DoR (clear acceptance criteria, risks, responses)
    2. Review the next 1-2 sprints of backlog
    3. Count the items meeting DoR (A)
    4. Count the total items reviewed in planning (B)
    5. Compute A/B

Capacity vs Demand Ratio
  • Description: comparison between available team capacity and planned demand
  • If this is high, it means: overcommitment, underestimation, team burnout, and low quality
  • Steps:
    1. Calculate team capacity in days or points (A)
    2. Sum the planned work effort (B)
    3. Compute Demand (B) / Capacity (A) (the closer to 1, the higher the risk)

Resource Availability Rate
  • Description: % of actual time available for planned work
  • If this is low, it means: high context switching, hidden operational work, and inadequate planning
  • Steps:
    1. Identify capacity (A)
    2. Subtract meetings, support, and shared work (a good practice is to allocate 20%-30% for these) (B)
    3. Compute Available Time (B) / Capacity (A)

Unplanned Work Ratio
  • Description: portion of work that entered the sprint after planning
  • If this is high, it means: unstable delivery, discovery delay, and weak decision-making (either upstream or downstream)
  • Steps:
    1. Track the work added mid-sprint
    2. Compare its effort vs the planned work
    3. Express it as a %

Technical Debt Accrual Rate
  • Description: rate at which new technical debt is accruing
  • If this is high, it often means: declining throughput, delayed design issues, and disappointed clients
  • Steps:
    1. Tag unclosed items per release
    2. Track debt per release
    3. Project the trend over time
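A minimal sketch of how these leading indicators could be computed. The function names and the numbers in the usage example are my own illustration, not from any real team or tool:

```python
# Hypothetical sketch of the leading indicators above; all inputs are illustrative.

def backlog_readiness_index(items_meeting_dor: int, items_reviewed: int) -> float:
    """A / B: share of reviewed backlog items that meet the Definition of Ready."""
    return items_meeting_dor / items_reviewed

def capacity_vs_demand_ratio(planned_effort: float, team_capacity: float) -> float:
    """Demand / capacity: values near (or above) 1 signal overcommitment risk."""
    return planned_effort / team_capacity

def resource_availability_rate(capacity: float, overhead: float) -> float:
    """(Capacity minus meetings/support/shared work) / capacity."""
    return (capacity - overhead) / capacity

def unplanned_work_ratio(unplanned_effort: float, planned_effort: float) -> float:
    """Effort added mid-sprint as a share of the planned effort."""
    return unplanned_effort / planned_effort

# Example with made-up numbers:
print(f"{backlog_readiness_index(12, 20):.0%}")    # 60%
print(f"{capacity_vs_demand_ratio(95, 100):.2f}")  # 0.95 (close to 1: risky)
print(f"{resource_availability_rate(100, 25):.0%}")  # 75%
print(f"{unplanned_work_ratio(15, 95):.1%}")
```

The point of the sketch is that each indicator is a simple ratio; the discipline lies in gathering the counts honestly during planning, not in the arithmetic.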

At the end of the release, phase, iteration, or sprint review, measure delivery performance using lagging indicators. These include time-to-market, defect density, feature adoption, schedule/cost variance (part of earned value management), and the team's throughput. There are various other KPIs that can be used based on the industry, team maturity, product life cycle stage, etc. 

It is important to avoid some anti-patterns when measuring flow, capacity, and intent. Some of the obvious ones are treating backlog readiness as a paperwork exercise without truly spending time on it, planning releases at 100% capacity, ignoring unplanned work exceptions, accumulating technical debt without increasing its visibility, and measuring outcomes without tracking leading signals. 

Furthermore, more subtle anti-patterns exist too. These are: 

  • Treating DoR as optional in the backlog in order to keep going faster. That only creates drift, not agility. 
  • Assuming shared resources will self-manage context switch within and across products.
  • Failing to adequately factor unplanned operational support work
  • Focusing on features and tracking defects without tracking the debt creation and customer value loss
  • Reviewing and addressing lagging metrics without focusing on prevention costs (Think CoQ)

What are your thoughts on this concept of resource management at the product level? I look forward to your ideas. 


 

Tuesday, July 22, 2025

When “Team Harmony” Becomes the Problem: A Scrum Master Coaching Moment

 A Scrum Master I was tutoring came to me visibly frustrated.

“I think I need to push the team harder. They’ve become too comfortable. Everything is consensus-driven. Velocity is flat. Technical debt keeps growing. Stories keep rolling over. And now bugs are showing up because the Product Owner doesn’t give enough clarity.”

On the surface, this sounded familiar. Many leaders respond to this by tightening controls, adding pressure, or redefining accountability. But instead of jumping to how to push the team, I asked a different question:

“Why do you believe pushing the team is the right answer?”

That question changed the conversation. As we unpacked the situation, a pattern emerged. The team wasn't lacking effort, intelligence, or intent. From the discussions, I felt they were highly cooperative, polite, and aligned. At least, they appeared to be. The emerging patterns were that decisions were rarely challenged, estimates were accepted without debate, and risks never surfaced clearly. What the Scrum Master was experiencing wasn't laziness. It was group comfort. That's when I introduced a concept from Japanese lean organizational culture: Mura Shakai (pronounced moo-rah-shah-kah-ee).

Mura Shakai is a cultural pattern often called the 'village effect.' In contrast to a collaborative, self-organized team that drives itself to excel, this pattern reflects a collectivist approach of respecting harmony and conformity, where people avoid standing out because of the social risk it carries. The very fact that no risks surfaced and no questions emerged on estimates or decisions means that no individual wants to 'rock the boat!' 

I reasoned that people confuse agility with harmony, thinking that asking powerful, challenging questions or raising risks disrupts psychological safety! On the contrary, psychological safety does not mandate the absence of conflict but the presence of constructive disagreement. In our example case, this misunderstanding of conflict avoidance meant that all the observed challenges (stagnant velocity, rising technical debt, shifting stories, and a PO blamed for bugs) were being socially filtered before they became truly visible. 

This microcosm of collectivist groupthink needs to be addressed carefully, not by pointing fingers. Pushing people hard would be a counterproductive move, as it would only create more dissent while people gravitate toward their comfort zones and defend their practices. Per Speed Leas' (1985) conflict model, we were at level 3, contest, at a minimum.

Now that we had a better understanding of the actual problem, I introduced the A3 technique. Its attention to detail, quick-win mentality, and one-page canvas format (connecting the dots much like a business model canvas) make it an easy technique to adopt for problem solving. A3 is a thinking discipline (similar to de Bono's thinking hats, for instance) applied at the team level, forcing the team to slow down, face reality, and confront uncomfortable issues collectively. So, how does this approach work?  

  1. The A3 approach starts with reframing the problem! So, instead of saying the team is slow, I suggested stating, "stories are rescoped mid-sprint," "work carries forward across multiple sprints," or "defects are discovered too late in the life cycle." None of these statements were new to the team, but they bring context into the problem. 
  2. The next step is to understand the impact on the current state. This is the "Go and See" approach, also called Gembutsu (Rajagopalan, 2024). The goal here is to understand the impact of the reframed problem statements, asking ourselves, "Where does this ambiguity come from? How does this technical debt slow us down later? When did the team recognize this risk, and why was it not raised?" I also cautioned the Scrum Master to practice active listening. 
  3. The next step is to perform root-cause analysis. The Ishikawa diagram, 5-Whys, influence diagrams, and many other techniques exist. The focus is two-fold: not only identify the root causes but also identify a potential set of solutions.
  4. The final step is to identify the action items on what can be changed. Suggest and coach, I emphasized, but do not dictate the action items. Once the team identifies the action items, owners, and the related changes, make them visible from the next iteration onward, creating accountability!
I concluded the session by recalling that Mura Shakai favors harmony and sometimes hides team dysfunction. The A3 technique reminds us to be structured about surfacing that dysfunction and making accountability visible to the team.

References

Leas, S. B. (1985). Discover your conflict management style. Alban Institute.    

Rajagopalan, S. (2024). Quality Responsibility: 5G of Quality Audit. https://agilesriram.blogspot.com/2024/08/quality-responsibility-5g-of-quality.html

Sunday, June 15, 2025

Playing the CARD at Scale: Lessons from a Global Strategic Business Office

An opportunity presented itself for me to reflect on a mental model that I had developed long ago. I had relied on this model to proactively sow the seeds of success for any initiative I worked on and to reactively address challenges as they surfaced. The opportunity came when a good friend discussed with me supply chain failures for a drug portfolio, where decision-making delays created strategic and operational challenges for a mid-sized healthcare organization. 

While some early thoughts focused on teams lacking the capacity and capability, I didn't feel completely convinced. Over the years, as a VP leading a PMO with many project and program managers reporting to me, I noticed a recurring pattern: projects rarely failed because teams lacked effort or intelligence. They struggled because the realities of execution were not made explicit early enough. To address this, I often used a simple but powerful mental model called CARD: Constraints, Assumptions, Risks, and Dependencies. Like a card in your pocket, it is something every mid-level manager should carry into planning conversations, steering committees, and day-to-day decision-making.

My thinking around CARD further matured significantly as my role evolved into VP of a Global Strategic Business Office (GSBO), where the PMO increasingly became a shared services capability rather than a standalone function. One arm of the GSBO focused on client-driven delivery programs, while the other governed the internal product and R&D portfolio across the organization. In this dual mandate—external execution and internal innovation—CARD became a pivotal enabler, helping us navigate strategy over a three-year horizon while still executing tactically at the project level.

Constraints are the non-negotiables of an initiative, and strong managers surface them early and often. They act like the skeletal system, defining the shape, structure, and boundary limits. The organization fails without the skeletal system, yet too much rigidity compromises agility. Beyond traditional limits like budget and timelines, constraints on the strategic side included leadership capacity, market timing, regulatory environments, and investment guardrails across portfolios. For client-driven programs, constraints were often contractual and immovable; for product and R&D initiatives, constraints showed up as funding thresholds, architectural decisions, or talent availability. 

In my GSBO initiatives, CARD helped ensure that constraints were explicitly acknowledged at the portfolio level, so teams didn’t overcommit locally and underdeliver globally. When surfaced early, constraints became tools for prioritization rather than excuses for delay. Every trade-off such as scope reduction, sequencing, and resourcing must explicitly reference which constraint is being protected. When constraints are invisible, teams make local optimizations that create global failure. Making constraints explicit creates realism, not pessimism.

Assumptions are where many plans quietly go wrong. These are statements we treat as true without proof. I view assumptions as the nervous system: they determine how the organization perceives the market and reacts to signals. I found that assumptions were where CARD created the most leverage, especially in product and innovation work. Multi-year roadmaps are built on assumptions about customer adoption, technology maturity, data readiness, and organizational change. In the GSBO, we treated assumptions as testable hypotheses, particularly for R&D experiments embedded within the portfolio. 

Product managers and business analysts or the project and program managers were expected to articulate what must be true for success and define signals (triggers) that would invalidate those assumptions. Unchecked assumptions turn into surprise risks and surfaced assumptions turn into managed conversations. This discipline allowed leadership to course-correct portfolios early rather than defending plans that no longer matched reality.

Risks spanned both execution and strategy and provided the highest value in the GSBO. In fact, risks were the main thread connecting projects, programs, and portfolios in any enterprise. Risks differ from constraints and assumptions in that they are probabilistic events with a positive or negative impact! 

  • At the project level, risks were treated within the project's threshold but escalated when their cumulative and overall impacts exceeded the project boundary. 
  • At the program level, risks included benefit slippage, vendor reliability, and integration complexity.  The program level risks also were delegated to project level as needed. 
  • At the portfolio level, risks expanded to include concentration risk, innovation failure, market shifts, and opportunity cost. 
CARD helped management elevate the right risks that genuinely threatened strategic outcomes. More importantly, it encouraged explicit risk ownership and leadership decisions about which risks to mitigate/enhance, transfer/share, avoid/exploit, and finally which risks to consciously accept.

Dependencies were the connective tissue of the GSBO and are frequently the most underestimated element of CARD. Client programs depended on internal product roadmaps; product initiatives depended on shared platforms, data, and specialized skills; R&D experiments depended on leadership patience and protected funding. In my opinion, projects do not fail in isolation but across the integration points between teams, vendors, stakeholders, and systems. 

CARD forced visibility into these interdependencies and prevented siloed decision-making. For project and program managers, managing dependencies became less about tracking dates and more about orchestrating conversations, aligning incentives, and escalating decisively when assumptions broke down. It forced us to ask hard questions: Who do we rely on? Who relies on us? What happens if this slips?

CARD element, body metaphor, and leadership connection:
  • Constraints (Skeletal): Structure, System, Boundary, Leverage
  • Assumptions (Nervous): Interpretation, Signals, Reflexive decision-making
  • Risks (Circulatory): Awareness, Escalation, Flow
  • Dependencies (Connective): Alignment, Integration, Cohesion
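As an illustration only, a CARD register could be captured as a simple data structure. Every class and field name below is my hypothetical sketch, not a prescribed template from the post:

```python
from dataclasses import dataclass, field

@dataclass
class CardEntry:
    """One entry in a hypothetical CARD register (illustrative fields only)."""
    category: str      # "Constraint", "Assumption", "Risk", or "Dependency"
    statement: str     # what is limited, assumed true, may happen, or is relied on
    owner: str         # who is accountable for watching this entry
    trigger: str = ""  # signal that invalidates an assumption or escalates a risk

@dataclass
class CardRegister:
    """Collects CARD entries so constraints and assumptions stay explicit."""
    entries: list = field(default_factory=list)

    def add(self, entry: CardEntry) -> None:
        self.entries.append(entry)

    def by_category(self, category: str) -> list:
        return [e for e in self.entries if e.category == category]

# Illustrative usage with made-up entries:
register = CardRegister()
register.add(CardEntry("Constraint", "Regulatory submission date is immovable", "PM"))
register.add(CardEntry("Assumption", "Customer adoption reaches pilot sites by Q3",
                       "Product Manager",
                       trigger="Pilot enrollment below 50% by end of Q2"))
print(len(register.by_category("Assumption")))  # 1
```

The design choice here mirrors the article's point: the value is not in the tooling but in forcing each entry to name an owner and, for assumptions and risks, an invalidating trigger.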

Finally, CARD works because it forces clarity. Just like bad strategy may result from rigid bones and faulty nerves, execution can fail because of weak connective tissue and blocked circulation. So, constraints shape choices, assumptions test logic, risks demand foresight, and dependencies expose interconnections. When the management and leadership consistently apply CARD, status conversations become sharper, surprises decrease, and leadership trust increases. 

In my experience, you don’t need more templates or tools—just the discipline to play your CARD well, every time. What are your thoughts? How have you applied any of these thoughts? Please share your insights.

Saturday, May 31, 2025

Relocating Experience: A Real-Life Lesson in Fisher & Ury’s Negotiation Principles

I have lived in the cold-weather belt of the United States for a long time. Once school-related constraints for our children eased, our long-standing dream of relocating became more urgent. We had explored towns across multiple states for years, but in early 2025, the decision solidified: relocate from Boston to Dallas by the end of April.

This single decision triggered a cascade of negotiations—with my wife, children, employers, a realtor, neighbors who are like family, lenders, inspectors, attorneys, insurance companies, our landlord, the homeowners’ association, and relocation movers. With an aggressive 90-day window to find a home or abandon the move altogether, emotions and time pressure were real. Looking back, I realized how deeply Fisher and Ury’s (1981) negotiation principles shaped both our personal and professional interactions—often without us consciously naming them.

1. Separate the People from the Problem

Coordinating travel for house-hunting quickly became emotional. One son wanted to participate actively, the other was neutral. My wife had work constraints; my role was largely remote. Our realtor strongly recommended that both my wife and I travel together for weekend visits. Frustration surfaced quickly: comments like “You’re being inflexible” or “What’s the point of you coming?” began to creep in.

Instead of letting this become personal, we reframed the problem as a scheduling and logistics challenge—not a commitment issue. We explored late-night travel, inexpensive hotels farther from target neighborhoods, and tightly packed viewing schedules. By acknowledging constraints rather than assigning blame, we preserved trust and opened up workable options.

2. Focus on Interests, Not Positions

Positions were clear: who had to travel, who couldn’t, and how much time we had. Interests were deeper: inclusion, learning, health concerns, cost, and quality family time. One son was willing to rely on video walkthroughs; I wanted him involved as a learning experience. My younger son preferred an uninterrupted summer break.

Our realtor played a critical role here. She filtered homes based on unstated interests—including allergies and lifestyle needs—and pushed early conversations with banks because time, not indecision, was our biggest risk. While it felt premature at times, aligning around interests rather than rigid positions helped us move faster with fewer regrets.

3. Invent Options for Mutual Gain

When a promising house from the first trip fell through, our realtor suggested another visit the very next weekend—something none of us had planned for. Work schedules, travel fatigue, and prior commitments collided. Walking away was tempting.

Instead, we invented options. We shifted travel days, adjusted work commitments, relied on video participation, and leaned on a close neighbor to cover personal obligations. Even our realtor adjusted around her prior commitments. What made this work was a shared focus on the collective objective—successful relocation—rather than individual convenience. This was win-win thinking in action.

4. Insist on Objective Criteria

Once we identified the right house, negotiations intensified—offer price, inspection outcomes, mortgage rates, insurance, and legal reviews. Here, objective criteria anchored every decision. Market data, expert inspections, lender benchmarks, and legal guidance replaced assumptions and emotions. Credit is due largely to our realtor, who consistently grounded discussions in facts rather than pressure or opinion.

5. Know Your BATNA

All of these principles ultimately converged on one critical discipline: knowing when to walk away. Our BATNA was explicit—if we could not find an affordable home and close within 90 days, we would exit the relocation altogether. Having clear exit criteria prevented emotional escalation and preserved relationships, even when discussions became tense. As I often emphasize elsewhere, if stop conditions are unclear, negotiations drift—and often fail (Rajagopalan, 2025).

I am sure all of us encounter similar negotiation opportunities to reflect. What comes to your mind? Please share your thoughts.

References

Fisher, R., & Ury, W. (1981). Getting to yes: Negotiating agreement without giving in. New York: Penguin Books.

Rajagopalan, S. (2016). Agility in Negotiation: Focusing on the “Why” behind mixing strategy with scenario. https://agilesriram.blogspot.com/2016/03/agility-in-negotiation-focusing-on-why.html

Rajagopalan, S. (2025). SEED: Understanding the warning triggers for failures. https://agilesriram.blogspot.com/2025/04/seed-understanding-warning-triggers-for.html

Saturday, April 19, 2025

SEED: Understanding the warning triggers for failures

I was facilitating a leadership course focusing on organizational transformation at Northeastern University! Learners came from Human Resources, Project Management, Informatics, and Leadership concentrations. In one of the discussions related to reasons for organizational failures, learners noted that initiatives failed because of poor market research, management myopia, process overhead, lapses in ethical oversight, and overreliance on technology instead of people. When I asked a few follow-up questions about the reasons for these failures, everyone narrowed in on bad strategy! 

How could strategy by itself fail? After all, strategy exists to create a "competitive advantage," and failure lies in people not paying attention to the warning triggers and not making the required course corrections! So, I explained to the learners the thinking behind what I call the SEED warning triggers. This is an approach I developed over a period of managing the Program Management Office, working directly with the C-suite and with clients, delivering numerous programs supporting a few portfolios. 

Success Measure: An organization is like a big family taking care of its members. Some family members may be children while others are adults. What makes one happy may not make the others happy, as everyone has specific goals and objectives. If the organization is a family, every member's longer-term needs are like the organizational initiatives. Most initiatives focus on achieving specific goals and objectives. So, this is frequently one of the things people identify in various documents, such as the business case, benefit management plan, benefit register, project charter, etc. Some metrics-driven organizations may also have specific objectives & key results (OKRs) to evaluate interim progress, used within the governance framework to prioritize and optimize initiatives aligned to the enterprise environmental factors and related resource management needs.

Entry Consideration: An organization is a living entity, with every portfolio, program, and product a healthy organ working holistically to sustain life. It is therefore important to have interim reviews to evaluate the level of success achieved so far and reevaluate which initiatives should continue moving forward. Even a successful initiative that has met all its criteria may be parked in favor of a different initiative because of external and internal events. So, it is always better to have predefined milestones (functionality- or schedule-based) to evaluate whether the initiative should enter its next phase based on its own success criteria being met. 

Exit Criteria: One of the major challenges with any initiative is failing to understand, recognize, and act on exit criteria. In my humble opinion, this is one of the main warning triggers people ignore. If one's lifestyle causes an unhealthy situation (e.g., prolonged exposure to construction work causing hearing loss, longer commutes causing family imbalance, a toxic work environment causing increased stress), then one has to make lifestyle changes, which may sometimes involve looking for options (e.g., a different role in the company, remote work options, an alternative job) to exit the current challenges. Similar logic applies to a portfolio, program, or project, where we should first define the exit criteria and then monitor the warning signs to determine when it is time to "STOP." Continuing to do the same thing while expecting different results is not smart!

Decision Delay: This is the final and most challenging issue. Even when people know that heredity predisposes them to diabetes, they follow lifestyles (eating sugary foods) that lead to a diabetes diagnosis. Despite the diagnosis, they don't exit their current lifestyle, creating more challenges for their family members. The reason is delaying the decision to act on the earlier warning signals. This delayed decision-making is the cancer that kills initiatives. When products delay customers' requests for newer functionality for a long time, for example, customers become dissatisfied (relate to the Kano model here). When portfolios defer rebalancing initiatives or reskilling employees, and programs delegate program-level risks to constituent projects without sustaining benefits, such delays seal the failure of any initiative. This is where skilled and timely governance really matters!

So, in the end, my SEED triggers are examples of things that leaders should constantly monitor and manage. Otherwise, failures are just accidents waiting to happen. Some learners in the class seemed thrilled to learn about this experiential approach. What are your thoughts? Please share.

Sunday, March 30, 2025

The science of estimation is an art rooted in risk management

I frequently find people mixing up estimation techniques between plan-driven and change-driven approaches, thinking they are completely different. While there are small differences, the art of estimation simply comes down to the level of accuracy of an estimate and the extent of confidence in working with it! From that angle, approaches like analogous estimation, parametric estimation, triangular estimation, the special PERT estimation, affinity estimation, relative sizing, and story pointing can all be categorized into top-down, budget, and bottom-up estimation. 

Top Down Estimation
  • The top-down estimate is often based on gut feel. It draws on the experience of previous projects (hence analogous). It could be driven off of someone's expertise (expert judgment) or based on knowledge tracked in historical records (corporate knowledge base). It is frequently done at the early stages of a project (initiation) where minimal effort is required to get a feel for whether an initiative should be undertaken or not! As a result, the level of accuracy is very low (-25% to +75%). This is why this technique is called rough order of magnitude (ROM) or order of magnitude (OOM). 
  • In adaptive projects, where features representing a collection of stories and tasks not yet broken down are estimated, collections of work close to each other (hence affinity) are estimated in abstract units such as T-shirt or coffee-cup sizes. Hence such estimates are called affinity estimates in the backlog!

Budget Estimation
  • As the project continues with planning, we look at increasing our confidence in the estimate. So, we get down to decomposing the details, such as the parameters required to estimate or seek opinion from multiple experts to narrow our estimate. Since we apply the parameters (number of rooms to paint * amount of paint required * price per paint can; number of virtual machines on the cloud * number of active hours * price per hour), we call it parametric estimation. The details of the parameters applied vary based on the industry and the project. 
  • When it comes to seeking an expert's opinion, instead of seeking a single estimate alone, we seek 3 points (optimistic, most likely, and pessimistic) to get the average. This average applies the statistical principle of central tendency to even out the variations. The same logic applies with PERT (program evaluation and review technique), where the most likely value is weighted more heavily so that the principle of normalization (bell curve logic) can be expected. So, PERT becomes a special case of the triangular estimate. 
  • For adaptive projects, the parametric thinking carries forward. A team looks at a feature in relation to another feature (either delivered or to be delivered). So, if the new feature is twice the size of another feature that is a small T-shirt size or 3 points, then the new feature is a medium size or 8 points. These estimates may apply at the release-level backlog, supporting the definition of ready (DoR) and following the DEEP (Detailed appropriately, Estimable, Emergent, Prioritized) properties.
  • Naturally, more time is required to identify the parameters, seek opinions from multiple people, and perform this estimation. As the level of accuracy increases to between -10% and +25%, the duration taken to estimate also increases.
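The three-point and PERT averages described above can be sketched in a few lines. The task numbers in the example are made up for illustration:

```python
def triangular_estimate(optimistic: float, most_likely: float, pessimistic: float) -> float:
    """Simple three-point (triangular) average: all three points weighted equally."""
    return (optimistic + most_likely + pessimistic) / 3

def pert_estimate(optimistic: float, most_likely: float, pessimistic: float) -> float:
    """PERT (beta) weighted average: the most likely value is weighted 4x."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Made-up example: a task estimated at 4 days optimistic, 6 most likely, 14 pessimistic.
print(triangular_estimate(4, 6, 14))  # 8.0
print(pert_estimate(4, 6, 14))        # 7.0
```

Notice how the PERT estimate sits closer to the most likely value, which is exactly the "bell curve logic" referred to above: the weighting pulls the average toward the center and dampens the effect of an extreme pessimistic point.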
Bottom Up Estimation
  • When the project continues into the later stages of planning, increased levels of accuracy and confidence are required to allocate work to someone and spend the money. Consequently, teams engage in a more granular breakdown of tasks (activities) or modules (function points) so that these activities can be estimated, dependencies understood, and lead/lag factored appropriately to compute the 'definitive' estimate.
  • In the case of adaptive projects, this is where the features or user stories are further broken down following the INVEST (Independent, Negotiable, Valuable, Estimable, Small, Testable) properties so that they are good candidates for sprint/iteration planning. In sprint planning, the team produces a collective estimate, applying central tendency to arrive at the story point (hence story pointing). Planning poker at this stage facilitates the team-level estimate, as the expectation at the Scrum level is that anyone should be able to pick up any story (a cross-functional, multidisciplinary, T-shaped-skills-based, self-managed team). As work is constantly refined through backlog refinement, release-level planning, and iteration-level planning, agile teams gain significant efficiency in sprint planning because the time taken to estimate is spread over time. 
Figure: Estimation Illustration by Sriram Rajagopalan


As you can see, the entire art of estimation is a classic risk management exercise where not only the management stakeholders but also the delivery team converge on the constraints, assumptions, risks, and dependencies (what I call the CARD of the business goals and objectives). No wonder project management is not only a science but an art!

Thoughts? What do you think? 

Tuesday, February 4, 2025

ALM 3.0: Emerging role of the Modern Cost of Quality guiding the decision-making in the Human–AI ecosystem

For about a couple of years, I have been rethinking the emergence of a modern cost of quality. Rooted in total quality management principles, the cost of quality grouped prevention and appraisal costs under the cost of conformance, and internal and external failure costs under the cost of non-conformance (Boehm, 1981). I still believe these approaches are very valid today, but they need to recognize reflective leadership (Rajagopalan, 2018) across additional layers: technology convergence (the 5th Industrial Revolution is already here) (Anil, 2025), strategic decision-making (value delivery is much more than product features) (Rothaermel, 2019), and ethical decision-making (Trevino & Nelson, 2017; Canca, 2020; Kemell & Vakkuri, 2024) in the products and tools used in today's workforce (AI considerations like beneficence, non-maleficence, justice, and autonomy), as well as their impact on the customers these products and tools serve (Rajagopalan, 2015). 
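As a minimal illustration of this classic grouping, total cost of quality is simply the sum of the four buckets. The category names come from the text; all dollar figures below are hypothetical:

```python
# Classic cost of quality grouping; every dollar figure is hypothetical.
cost_of_conformance = {
    "prevention": 40_000,  # training, standards, process design
    "appraisal": 25_000,   # reviews, testing, audits
}
cost_of_nonconformance = {
    "internal_failure": 30_000,  # rework and scrap caught before release
    "external_failure": 55_000,  # defects, support, and reputation after release
}

total_coq = sum(cost_of_conformance.values()) + sum(cost_of_nonconformance.values())
conformance_share = sum(cost_of_conformance.values()) / total_coq
print(total_coq)                   # 150000
print(f"{conformance_share:.0%}")  # 43%
```

The usual managerial reading of such numbers is that shifting spend toward prevention and appraisal tends to shrink the (typically larger) failure buckets, which is the trade-off the modern cost of quality discussion builds on.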

One of the premises behind my development of the modern cost of quality is that quality is no longer just an outcome of what is delivered. It is part of a continuously governed decision space! However, in practice, people used application lifecycle management (ALM) tools in their plan-driven and change-driven approaches as a repository of artifacts to provide traceability and auditability in their compliance ecosystem! My synthesis of some observations is as follows:

  • Decisions are more important than artifacts
  • Quality is multidimensional and begins with needs assessment
  • Artificial Intelligence enabled solutions both help and harm value creation
  • Humans remain morally and economically accountable, unlike the machine workforce
  • Learning velocity matters more than delivery velocity

The mental model of ALM 1.0 lasted for about 10 years through the late 1990s, where the mantra was to reduce life cycle variance. Software was considered a controllable engineering artifact, leading to the adoption of the staged software development life cycle (SDLC) approach (Royce, 1970) that relied on documentation, with responsibilities delineated across business analysts, system analysts, engineers, quality professionals, and operations. The major challenges were delayed decision-making and traceability across artifacts sometimes maintained in multiple disparate tools, which were attributable to incorrect practices (Rajagopalan, 2014) more than to the tools. 

The formation of the Agile Manifesto in 2001 (Cunningham, 2001) shaped the mindset for another 10+ years, driving the adoption of adaptive approaches to software development. Subsequently, the challenge of treating a single source of truth as non-negotiable led to the ALM 2.0 era, where the mantra evolved to optimizing flow across cross-functional teams and a customer proxy for faster feedback. This made software a living product that evolved through feedback (Meadows, 2008; Checkland, 1999). Speed was more important than accuracy, as continuous delivery in iterative cycles was expected to deliver the quality outcome! The SDLC stages gave way to stages like inception, elaboration, construction, and transition, with overlapping responsibilities across roles. So, the discovery of needs, such as backlog refinement, was no longer one person's job, and solution engineering (design, development, delivery, and deployment) became a cross-functional team commitment traceable through integrated artifacts (Rajagopalan, 2019).

ALM 2.0 was a significant improvement, focused on flow and reducing waste! Understanding customer needs earlier, with faster feedback loops, paved the foundation for better solutions. But several anti-patterns continued (Rajagopalan, 2020): tools were abused and misused by siloed teams, role overloading created ambiguity, and customer proximity was lacking. These challenges manifested as newer problems such as tool sprawl, ritualized ceremonies, velocity weaponization, and untraceable decisions.

Despite agility's promises, practice reinvented the same problems, just as waterfall originally existed only in practice and never in theory (Rajagopalan, 2014). Some even thought fancy reports and KPI dashboards would point to the problem, failing to recognize that the tool was not the villain and technology was not the problem. The real issue was the premature adoption of technology within the business environment without change management, talent re/upskilling through training, required documentation, provisioning of access to the tools, and, finally, the time to execute (prevention costs)! People failed the processes that create better products!

Through my consulting on ALM training, project and program training, and coaching and mentoring, I observed that people's resistance to learning, combined with the business's need to accelerate, left teams failing to reflect on powerful questions: Why was one feature prioritized over another? What risks were consciously treated? How well were we monitoring existing and emerging risks? What alternatives were rejected? What assumptions could invalidate our ongoing decisions in value delivery? 
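Those powerful questions suggest a lightweight decision record that an ALM tool could store alongside each work item. Here is a minimal sketch in Python; all names and fields are hypothetical illustrations, not any vendor's schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DecisionRecord:
    """Captures the 'decision memory' behind a prioritization call."""
    item: str                      # the feature, story, or risk being decided on
    rationale: str                 # why this was prioritized over alternatives
    risks_treated: List[str] = field(default_factory=list)
    alternatives_rejected: List[str] = field(default_factory=list)
    assumptions: List[str] = field(default_factory=list)  # what could invalidate the decision

    def is_reviewable(self) -> bool:
        # A decision is auditable only if we recorded why we made it
        # and what we consciously gave up.
        return bool(self.rationale and self.alternatives_rejected)

# Example: a record that would survive a later "why did we do this?" audit
rec = DecisionRecord(
    item="Checkout redesign",
    rationale="Highest churn driver observed in Q3 telemetry",
    risks_treated=["payment-gateway latency"],
    alternatives_rejected=["Loyalty program revamp"],
    assumptions=["Churn attribution model remains valid"],
)
print(rec.is_reviewable())  # True
```

The point of the sketch is not the data structure itself but the discipline: if a tool cannot answer these questions for a past decision, the dashboard is reporting activity, not judgment.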

Before this fire could be addressed, a combination of forces created the perfect storm on top of it: market pressures, 4IR technologies and products with cyber-physical interfaces, a tool explosion with automated testing, and the emergence of AI with its generative capabilities. Discussions at many conferences were no longer purely technical; they turned to how tools supported decision quality, ethical quality, societal impact, trust quotient, learning debt, automation bias, and model risk! This was around 2021, as we were just coming out of the fire of the pandemic dragon! That was when I felt a new ALM 3.0 era emerging.

ALM 3.0 brought a new mantra, significantly different and refreshing: manage the cost associated with the risks of bad decisions amplified by AI. I view ALM 3.0 as a modern way of optimizing the cost of quality for the human-AI ecosystem in the 4IR and 5IR application lifecycle management space (Domin et al., 2024). It is not just a tool for tracking and tracing work artifacts. To me, ALM 3.0 reinvigorates the trust-preservation framework (Canca, 2020; Kemell & Vakkuri, 2024) that ALM tools can promote: tracking decision memory and addressing the known, unknown, and unknowable risks across all human resources (individuals, teams, stakeholders, stakeholder groups) and non-human resources (facilities, equipment, materials, infrastructure, and supplies). Here is my proposed ALM 3.0 architecture across the value delivery framework.


(c) Dr. Sriram Rajagopalan, ALM 3.0 Decision-Making Value Delivery Framework

As I presented these ideas at the Global Conference on Leadership and Project Management, revalidating my thoughts, I began asking myself: how can ALM 3.0 scale and sustain itself? I felt convinced that quality is a non-negotiable necessity across industries globally, underpinning the inexorable need to infuse the current cost-of-quality principles with modern elements around quality engineering, quality audits, internal quality monitoring, and external quality impact. And thus emerged my modern cost of quality foundation!


(c) Dr. Sriram Rajagopalan, Modern Cost of Quality Framework

  • Proactive prevention costs no longer rest on training, documentation, equipment, and the time to do things right! They also integrate AI's strength in requirements management, test case authoring, task writing, code writing, risk identification, and risk response planning, for instance. At the same time, treating machine resources (robots, machines, agents) as expendable could compromise efficiency! Yes, it is easy to start a new machine in the cloud, but can we as easily build a new robot that cleans airport floors? The goal is to continuously learn from, and guide ourselves on, the processes and decisions made. I call this component "Quality Engineering".
  • Appraisal costs are not limited to monotonous testing and inspections! They expand into AI-driven anomaly detection, automated and RPA testing, exploratory unscripted testing, and digital twins for performance monitoring. I call this component "Quality Audit".
  • Internal failure costs are not rework and scrap alone! We are now responsible for the costs of decisions delayed or deferred that lead to quality slips in the customer's hands. This means ALM 3.0 requires teams to identify, assess, and treat non-functional requirements as part of application delivery and to monitor triggers that may be hiding problems. 
    • Capturing these alerts (e.g., health-check monitors) and performing AI-assisted root cause analysis, along with identifying and serving risk response plans, are critical. 
    • In a manufacturing context, this means looking at materials proactively and ordering them to limit WIP, forecasting capacity, and suggesting design recommendations (e.g., increased load on machines requiring new machines to be spun up). 
    • With robots, autonomous cars, and newly created applications, we may have to evaluate how far they can calibrate, reconfigure, or heal themselves. 
    • All of these thoughts may require designing alerts (features people don't see but that are required to sustain these products) and escalating them intelligently up and down the enterprise risk registers. This requires not only technology support but also retraining people on the business processes used in the adopted tools. I call this component "Internal Quality Monitoring".
  • External failure costs are no longer just the company's challenges of lost business, liability, and warranty! In fact, they now draw more support from automation and AI in the forms of:
    • Collecting telemetry data to learn what people want and where effort spent did not generate value (customer retention, hidden features customers can't find, etc.)
    • Filtering market signals to inform decision-making for products, projects, programs, and portfolios. 
    • Continuously evaluating our AI models to justify options for customers who cannot use the systems as designed (the ethical component of justice) and to validate themselves (explainability, responsible AI compliance, etc.). I call this component "External Quality Impact".

Some of these thoughts, I realize, are forward looking! We are not there yet! But I am sure regulations are catching up, the market is moving faster, and customers are demanding more! The question is: if AI is helping one work faster, why can't value be delivered to us faster too?
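One way to see the four components together is as a simple roll-up that a cost-of-quality dashboard might compute. This is a hedged sketch: the category mapping mirrors the four components named above, but the line items and amounts are purely illustrative, not a prescribed chart of accounts:

```python
from collections import defaultdict

# Map illustrative cost line items to the four modern cost-of-quality
# components described in this post (names are the post's own labels).
COMPONENTS = {
    "ai_test_authoring": "Quality Engineering",             # prevention
    "upskilling_training": "Quality Engineering",
    "rpa_regression_suite": "Quality Audit",                # appraisal
    "digital_twin_monitoring": "Quality Audit",
    "deferred_decision_rework": "Internal Quality Monitoring",  # internal failure
    "alert_triage": "Internal Quality Monitoring",
    "warranty_claims": "External Quality Impact",           # external failure
    "model_explainability_review": "External Quality Impact",
}

def cost_of_quality(line_items: dict) -> dict:
    """Roll raw spend up into the four components."""
    totals = defaultdict(float)
    for item, amount in line_items.items():
        totals[COMPONENTS.get(item, "Unclassified")] += amount
    return dict(totals)

spend = {"ai_test_authoring": 40, "rpa_regression_suite": 25,
         "deferred_decision_rework": 60, "warranty_claims": 15}
print(cost_of_quality(spend))
# {'Quality Engineering': 40.0, 'Quality Audit': 25.0,
#  'Internal Quality Monitoring': 60.0, 'External Quality Impact': 15.0}
```

The value of such a roll-up is the conversation it forces: if internal failure dwarfs prevention, the organization is paying for deferred decisions rather than investing in Quality Engineering.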
What do you think? Please share your opinions! I would love to hear from you as I fill my gaps further.

References
Anil, A.M (2025, Jan 25). Technology convergence is leading the way for the fifth industrial revolution. World Economic Forum. Retrieved from https://www.weforum.org/stories/2025/01/technology-convergence-is-leading-the-way-for-accelerated-innovation-in-emerging-technology-areas/

Boehm, B. W. (1981). Software engineering economics. Prentice Hall.

Bolman, L. G., and Deal, T. E. (2017). Reframing Organizations: Artistry, Choice, and Leadership (6th Ed.) CA: Wiley & Sons, Inc. 

Trevino, L., and Nelson, K. (2017). Managing Business Ethics: Straight Talk About How to Do It Right (7th Ed).  Hoboken, New Jersey: John Wiley & Sons.

Canca, C. (2020). Operationalizing AI ethics principles. Communications of the ACM, 63(12), 18-21.

Kemell, K.-K., & Vakkuri, V. (2024). What is the cost of AI ethics? Initial conceptual framework and empirical insights. 247-262.

Checkland, P. (1999). Systems thinking, systems practice. John Wiley & Sons.

Cunningham, W. (2001). Manifesto for Agile Software Development. Retrieved from  https://agilemanifesto.org/

Domin, H., Rossi, F. Goehring, B., Ganapini, M., Berente, N., & Bevilacqua, M. (2024, July 29). On the ROI of AI Ethics and Governance Investments: From loss aversion to value generation. California Review Management. https://cmr.berkeley.edu/2024/07/on-the-roi-of-ai-ethics-and-governance-investments-from-loss-aversion-to-value-generation/

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707.

Highsmith, J. (2009). Agile project management: Creating innovative products (2nd ed.). Addison-Wesley.

Meadows, D. H. (2008). Thinking in systems: A primer. Chelsea Green Publishing.
 
Rajagopalan, S. (2025a). Leadership Unleashed: Game Changing Insights. Denver, CO: Outskirts Press.

Rajagopalan, S. (2020, Jan 11). What Patterns to avoid in Agile Ceremonies? Retrieved from  https://youtu.be/i7nR3gn34Go

Rajagopalan, S. (2019, Jan 30). Five Principles for Managing Your Application Lifecycle with SpiraTeam. Retrieved from https://youtu.be/s2Z41C4W1mU

Rajagopalan, S. (2018, June). Leadership Simplified: Leaders Must SLEEP. IEEE Engineering Management Review, 46 (2), 18-20. 

Rajagopalan, S. (2016, July). TONES: A reference framework for identifying skills and competencies and grooming talent to transform middle management through the field of Project Management. International Journal of Markets and Business Systems, 2(1), 3-24. 

Rajagopalan, S. (2015). Product personification: PARAG model to successful software product development. International Journal of Managing Value and Supply Chains, 6(1), 1-12. 

Rajagopalan, S. (2014). Review of the myths on original software development model. International Journal of Software Engineering & Applications, 5(16), 103-111.

Rothaermel, F. T. (2019). Strategic management (4th ed.). New York, NY: McGraw-Hill Education. 

Royce, W. W. (1970). Managing the development of large software systems. Proceedings, IEEE WESCON, August, 1-9.




Friday, January 17, 2025

The 5 I’s of Stakeholder Engagement: Building Stronger Connections

I often say, "Tell me about the stakeholders identified and your engagement strategy for your project, and I will tell you how successful your project will be!" My thought is modeled after the saying, "Tell me who your friends are, and I will tell you who you are!" Since stakeholders are people who can positively or negatively influence, or be impacted by, the project's outcome, it is important to understand their paramount role in delivering the 5P's (project, process, product, program, and portfolio) of value delivery! 

Furthermore, since stakeholders are not members we can directly manage, we need to see how we can engage them effectively. This is crucial not only for the 5P's success but also for the strategic growth of the organization and for fostering strong partnerships for vertical and horizontal growth. In this regard, I feel there are five magic ingredients of stakeholder engagement: interest, involvement, interdependencies, influence, and impact. These elements help leaders navigate complex relationships and align objectives, facilitating execution as well as governance. Understanding these dimensions ensures that stakeholders remain engaged, informed, and motivated throughout the journey.

1. Interest (Care or Concern)
Stakeholders must have a clear interest in the project or initiative. Identifying what motivates them, be it financial returns, innovation, or social responsibility, helps shape engagement strategies. Here, I pair care with concern. Care gives a positive spin on stakeholders, while concern captures the adverse considerations they may have. In both cases, if they remain silent observers, creative ideas for problem solving and decision making are left out. So, the extent of their involvement is the next thing to understand. 

2. Involvement
Once interest is established, involvement becomes key. Encouraging active participation through workshops, feedback loops, and collaborative decision-making strengthens their commitment and enhances project outcomes. While adaptive approaches say "business people and developers must engage on a daily basis" to emphasize involvement, plan-driven approaches promote similar thinking through specific stage/phase gate reviews. In both delivery approaches, it is important to understand how proactively stakeholders are involved! The sooner you understand this engagement, the earlier you can address risks through preventive action. 

3. Interdependencies
Stakeholders do not exist in isolation. Business units have their own objectives that serve the broader organizational objectives. So, across all elements of the 5P's, stakeholders interact with each other in various ways, impacting (which, by the way, is the fifth "I") project dynamics. Recognizing interdependencies allows for strategic alignment, reducing friction and maximizing collaboration. Preparing people in advance for how our work impacts others builds the surround sound required for success.

4. Influence
The combination of interest, involvement, and interdependencies is not adequate if one does not support the overarching objectives by championing change. Influence therefore connects to the behavioral change people can drive, not only through hierarchical authority but also through the expertise they bring to the team. Influence determines how much power stakeholders wield over decisions. Understanding their authority, expertise, and networks helps in prioritizing engagement efforts effectively.

5. Impact
At the heart of stakeholder engagement is impact—how actions and decisions affect both the project and the stakeholders themselves. Effective engagement strategies create value for all parties, fostering trust and long-term relationships. By aligning interests, managing influence, and leveraging interdependencies, leaders can drive meaningful change, ensuring projects achieve sustainable success.

So, between the two bookends of interest and impact lie involvement, interdependencies, and influence. While techniques such as the stakeholder register, stakeholder engagement assessment matrix, power-interest grid, salience model, and stakeholder map exist, stakeholder engagement is more art than science! It comes only with practice!
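For readers who want to see the power-interest grid as more than a diagram, here is a minimal classifier that applies the commonly taught quadrant strategies (manage closely, keep satisfied, keep informed, monitor). The threshold and the example scores are illustrative assumptions; real assessments are judgment calls, not spreadsheet outputs:

```python
def engagement_strategy(power: float, interest: float, threshold: float = 0.5) -> str:
    """Classic power-interest grid; scores are assumed normalized to 0..1."""
    if power >= threshold and interest >= threshold:
        return "Manage closely"      # high power, high interest
    if power >= threshold:
        return "Keep satisfied"      # high power, low interest
    if interest >= threshold:
        return "Keep informed"       # low power, high interest
    return "Monitor"                 # low power, low interest

# Hypothetical stakeholder scores: (power, interest)
stakeholders = {"Sponsor": (0.9, 0.8), "Regulator": (0.9, 0.3),
                "End user": (0.2, 0.9), "Vendor": (0.2, 0.2)}
for name, (p, i) in stakeholders.items():
    print(name, "->", engagement_strategy(p, i))
```

The grid only sorts stakeholders into starting strategies; the five I's above determine how the conversation actually goes within each quadrant.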

Don't you think so? Thoughts?