Wednesday, April 30, 2014

Key Performance Indicators must start with the business need

In a recent workshop I attended on Agile Engineering Practices, as we began discussing how to differentiate the roles of individuals in acceptance test driven development (ATDD) and test driven development (TDD), a question came up on which key performance indicators (KPIs) should be used. I offered my viewpoint that starting with the question of which KPI to measure is an incorrect strategy for any organization or business unit.

Let us define what a KPI is. My interpretation is that a KPI is an objective measure of the efficiency with which a business process is executed to deliver incremental value effectively against a business need. By this definition, I see six contributing factors in three ordered pairs, as follows:
  1. Business Need & Effectiveness (this is where Objectives and Key Results [OKR] fit)
  2. Business Process & Engagement (this is where Critical Success Factors [CSF] fit)
  3. Delivery & Operational Efficiency (this is where Key Performance Indicators [KPI] fit)
The OKR is a corporate-level initiative that connects with the vision (moving from the "as is" to the "to be" state based on BHAG ideas). This may emerge from the portfolio function (identified in the business case) defining what needs to be done across the organization (identified in the project/program charter) to align stakeholders, commit resources, and engage for success (which I call ACE for the charter: Align, Commit, Engage). Consequently, each business unit comes up with its own critical success factors (CSF). In my opinion, the CSFs span at least six areas: strategy, operations, product, marketing, finance, and people. Then, as programs and projects are chartered to deliver benefits through capabilities, enablers, and features in the product portfolio, the KPIs measure the delivery and operational efficiency.

If we conceptualize all these areas together, then the overlap between each pair of domains (OKR & CSF, OKR & KPI, CSF & KPI) forms the Key Result Areas (KRA) in which all business units collaborate vertically and horizontally (this is where value emerges). Every interaction is loaded with several external and internal risks and is therefore associated with risk assessment. This is one of the reasons why products, projects, programs, and portfolios have risk (as measured by Key Risk Indicators [KRI]) as the central vehicle throughout the organization (OKR & CSF & KPI & KRA). If we drive the adoption of KPIs without understanding these interconnections, we end up with anti-KPI patterns because we are not measuring what matters!


Figure: Dr. Rajagopalan's conceptualization of the relationship among OKR, CSF, KPI, KRA, and KRI

The business need differs across organizations and across business units within an organization. For example, consider schedule variance (SV). SV is a project management measure that compares the earned value of a project (BCWP, the budgeted cost of work performed) with its planned value (BCWS, the budgeted cost of work scheduled).
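For reference, here are the standard earned-value formulas behind SV and the related indices that appear in the list at the end of this post, written in conventional EVM notation; the symbols EV, PV, and AC (earned value, planned value, actual cost) are the modern names for BCWP, BCWS, and ACWP:

$$SV = EV - PV = BCWP - BCWS, \qquad SPI = \frac{EV}{PV}$$
$$CV = EV - AC, \qquad CPI = \frac{EV}{AC}$$

A negative SV (equivalently, an SPI below 1) indicates the project is behind schedule.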

While SV is a good measure of an individual project's progress, in regulated industries that manage portfolios or programs with multiple dependent subprojects, where reviews by regulatory boards are external constraints, SV may not be an accurate measure of the performing organization's project efficiency unless such externally imposed delays are removed from consideration. So, if the goal is to measure project health while discounting such external influences, then raw SV may not accurately address the business need.

Now, when the right KPI is chosen based on the organization's or business unit's goal, we can shift attention to the business process in place and ask how efficiently it is serving that goal. For instance, if we compare the number of forecasted project launches to actual project launches where proper workflow, documentation, and versioning controls exist, the focus can become more objective: why did tasks slip, how were projects prioritized, and so on. A minimal sketch of such a measure follows.
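As an illustration only, here is a small Python sketch of such a forecast-to-launch measure; the record structure and field names are hypothetical, not taken from any particular tool:

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    forecasted_launch: bool  # was this project forecasted to launch this period?
    launched: bool           # did it actually launch?

def launch_forecast_kpi(projects: list[Project]) -> float:
    """Percentage of forecasted projects that actually launched in the period."""
    forecasted = [p for p in projects if p.forecasted_launch]
    if not forecasted:
        return 0.0
    launched = sum(1 for p in forecasted if p.launched)
    return 100.0 * launched / len(forecasted)

portfolio = [
    Project("CRM upgrade", forecasted_launch=True, launched=True),
    Project("Mobile app", forecasted_launch=True, launched=False),
    Project("Data warehouse", forecasted_launch=True, launched=True),
    Project("Internal wiki", forecasted_launch=False, launched=True),  # not forecasted
]
print(f"Forecast-to-launch KPI: {launch_forecast_kpi(portfolio):.0f}%")  # 67%
```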

As a result, whether it is a traditional or an iterative software development setting, the focus should first be on what we are trying to measure and why, and only then on having the right processes and tools in place to collect data for analysis. Collecting exhaustive data accurately and then applying an incorrect KPI will only lead to inaccurate assessment. For instance, measuring the raw number of defects logged by a tester motivates the tester to log many small defects instead of consolidating related ones (grammar, spelling, punctuation, and formatting problems on a single line logged as multiple bugs). Everything countable does not always count, right?
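To make this anti-pattern concrete, here is a toy sketch (the defect log entries and the consolidation rule are assumed for illustration) contrasting a raw defect count with a consolidated one:

```python
# Hypothetical defect log entries: (document, line, category)
defects = [
    ("spec.doc", 12, "grammar"),
    ("spec.doc", 12, "spelling"),
    ("spec.doc", 12, "punctuation"),
    ("spec.doc", 12, "formatting"),
    ("spec.doc", 40, "logic"),
]

raw_count = len(defects)  # 5: rewards splitting one problem into many tickets

# Consolidate cosmetic issues on the same line into one defect,
# while substantive categories remain distinct defects.
cosmetic = {"grammar", "spelling", "punctuation", "formatting"}
consolidated = {(doc, line) if cat in cosmetic else (doc, line, cat)
                for doc, line, cat in defects}

print(raw_count, len(consolidated))  # 5 2
```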

I would like to start a list of KPIs for a Project, Program, and Portfolio Management setting. Please let me know what you think should be added in this context.
  1. Schedule Variance and/or Schedule Performance Index
  2. Cost Variance and/or Cost Performance Index
  3. Number/Percentage of projects launched against those forecasted to launch
  4. Number/Percentage of milestones reached/missed by project
  5. Number of FTE hours per project in a delivery-based schedule (no resource leveling)
  6. % of projects running behind schedule
  7. Defect Density (Number of defects logged against requirements)
  8. % failure rate against test cases
  9. % of test cases/requirements coverage
  10. Number of risks against requirements/test cases
  11. Prioritization of requirements/test cases based on risks
  12. Task Progress (against DoD)
  13. Extent of ATDD/BDD by Business Users (e.g.: exploratory tests)
  14. Number of Escaped Defects
  15. DORA metrics
  16. Cycle Time, Lead Time, WIP
  17. Estimation Accuracy
  18. Commitment Evaluation (e.g.: Planned vs Actual Velocity)
  19. Backlog Growth and Burn rate
  20. Risk Reduction Rate (Identified, Treated by Response Type, Residual)
  21. % of projects on budget
  22. % of challenged projects per program or portfolio
  23. % of Change Orders $ to original SOW $
  24. Internal Rate of Return
  25. Net Present Value
  26. Payback Period (NPV and payback are illustrated in the sketch after this list)
  27. Net Profit Margin
  28. Capacity Utilization Rate (e.g.: Profitability per Project Management Resource)
  29. Account Growth Revenue
  30. Cash Conversion Rate (Revenue Recognition Frequency)
  31. Concept to Cash Cycle Time
  32. Marketing Metrics 
  33. % of Complaints over Product/Program
  34. Customer Satisfaction Index (e.g.: NPS)
  35. Training Efficiency
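For two of the financial measures above, here is a minimal Python sketch of net present value and payback period; the cash flows and the flat 10% discount rate are assumed purely for illustration:

```python
def npv(rate: float, cashflows: list[float]) -> float:
    """Net present value; cashflows[0] is the initial outlay at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_period(cashflows: list[float]) -> float | None:
    """Years until cumulative (undiscounted) cash flow turns non-negative."""
    cumulative = 0.0
    for t, cf in enumerate(cashflows):
        prev = cumulative
        cumulative += cf
        if cumulative >= 0:
            if t == 0:
                return 0.0
            # Interpolate within the final year for a fractional period.
            return t - 1 + (-prev / cf)
    return None  # never pays back within the horizon

flows = [-100_000, 40_000, 40_000, 40_000, 40_000]  # assumed project cash flows
print(f"NPV at 10%: {npv(0.10, flows):,.0f}")         # NPV at 10%: 26,795
print(f"Payback: {payback_period(flows):.1f} years")  # Payback: 2.5 years
```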