In a recent workshop I attended on Agile Engineering Practices, as we began discussing how to differentiate the roles of individuals for acceptance test driven development (ATDD) and test-driven development (TDD), a question came up about which key performance indicators (KPIs) should be used. I offered my viewpoint that starting by deciding which key performance indicator to measure is the wrong strategy for any organization or business unit.
Let us first define what a KPI is. My interpretation is that a KPI is an objective measure of the efficiency with which a business process is executed to deliver incremental value effectively to a business need. By this definition, I see six contributing factors in three ordered pairs, which are as follows:
- Business Need & Effectiveness (Objectives and Key Results [OKR] comes here)
- Business Process & Engagement (Critical Success Factors [CSF] comes here)
- Delivery & Operational Efficiency (Key Performance Indicators [KPI] comes here)
The OKR is a corporate-level initiative that connects with the vision (moving from the "as is" to the "to be" state based on BHAG ideas). It may emerge from the portfolio function (identified in the business case), defining what needs to be done across the organization (identified in the project/program charter) to align stakeholders, commit resources, and engage for success (what I call ACE for charter). Consequently, each business unit comes up with its own critical success factors (CSFs). In my opinion, the CSFs are spread across at least six areas: strategy, operations, product, marketing, finance, and people. Then, as programs and projects are chartered to deliver benefits through capabilities, enablers, and features in the product portfolio, the KPI measures the delivery and operational efficiency.
If we conceptualize all these areas together, then the overlaps between the domains (OKR & CSF, OKR & KPI, CSF & KPI) are the Key Result Areas (KRAs) where all business units collaborate vertically and horizontally (this is where value is created). Every such interaction carries several external and internal risks and is therefore associated with risk assessment. This is one of the reasons why products, projects, programs, and portfolios have risk (as measured by Key Result Indicators) as the central vehicle throughout the organization (OKR & CSF & KPI & KRA). If we drive toward KPIs without understanding these interconnections, we end up with anti-KPI patterns, because we are not measuring what matters!
Business needs differ across organizations and across business units within an organization. For example, consider schedule variance (SV). SV is a project management measure of how the earned value of a project (BCWP) compares to its planned value (BCWS). While SV is a very good measure of an individual project's progress, in regulated industries that manage portfolios or programs with multiple dependent subprojects, where approvals from regulatory boards are external constraints, SV may not be an accurate measure of the performing organization's project efficiency unless such delays are removed from consideration. So, if the goal is to measure project health while discounting such external influences, then such a measure may not accurately address the business need.
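To make the earned value arithmetic concrete, here is a minimal Python sketch of the standard EVM formulas (the dollar figures are hypothetical):

```python
# Standard earned value management (EVM) formulas.
# BCWP = earned value, BCWS = planned value, ACWP = actual cost.
def earned_value_metrics(bcwp: float, bcws: float, acwp: float) -> dict:
    return {
        "SV": bcwp - bcws,   # Schedule Variance (negative = behind schedule)
        "SPI": bcwp / bcws,  # Schedule Performance Index (< 1 = behind)
        "CV": bcwp - acwp,   # Cost Variance (negative = over budget)
        "CPI": bcwp / acwp,  # Cost Performance Index (< 1 = over budget)
    }

# Hypothetical project: earned $80k of value against a $100k plan, spent $90k.
print(earned_value_metrics(bcwp=80_000, bcws=100_000, acwp=90_000))
# -> SV = -20,000 and SPI = 0.8 (behind schedule); CV = -10,000 and CPI ~ 0.89 (over budget)
```

Note that a negative SV by itself cannot tell an internal slip apart from an externally imposed regulatory delay, which is exactly the caveat above.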
Now, once the right KPI is chosen based on the organization's or business unit's goal, we can shift our attention to the business process in place and ask how efficiently it is serving that goal. For instance, if we use the ratio of forecasted project launches to actual project launches, and proper workflow, documentation, and versioning controls exist, then the focus can become more objective: measuring why tasks slipped, how projects were prioritized, and so on, as the sketch below illustrates.
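As a rough illustration (the project names and statuses are hypothetical), the launch-ratio KPI and the follow-up question of which projects slipped fall out of a simple tally:

```python
# Hypothetical project records: (name, forecasted_to_launch, actually_launched)
projects = [
    ("Alpha", True, True),
    ("Beta",  True, False),   # slipped: the interesting case to drill into
    ("Gamma", True, True),
    ("Delta", False, True),   # launched but never forecasted: a planning gap
]

forecasted = [p for p in projects if p[1]]
launched = [p for p in forecasted if p[2]]
print(f"Forecast-to-launch ratio: {len(launched) / len(forecasted):.0%}")  # 67%
print("Slipped:", [name for name, _, done in forecasted if not done])      # where to ask "why?"
```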
As a result, whether it is a traditional or an iterative software development setting, the focus should be on what it is that we are trying to measure and why, and only then on putting the right processes and tools in place to collect data for analysis. Collecting exhaustive data accurately and applying an incorrect KPI will only lead to inaccurate assessment. For instance, measuring the number of defects logged by a tester would motivate the tester to log many defects instead of consolidating related ones (e.g., grammar, spelling, punctuation, and formatting issues on a single line logged as multiple bugs). Not everything countable counts, right!
I would like to start a list of KPIs for a Project, Program, & Portfolio Management setting. Here is a starter list; please let me know what else should be added.
- Schedule Variance and/or Schedule Performance Index
- Cost Variance and/or Cost Performance Index
- Number/Percentage of Forecasted Projects Actually Launched
- Number/Percentage of milestones reached/missed by project
- Number of FTE hours / project in a delivery-based schedule (no resource leveling)
- % of projects running behind schedule
- Defect Density (Number of defects logged against requirements)
- % of test cases failing
- % of test cases/requirements coverage
- Number of risks against requirements/test cases
- Prioritization of requirements/test cases based on risks
- Task Progress (against DoD)
- Extent of ATDD/BDD by Business Users (e.g.: exploratory tests)
- Number of Escaped Defects
- DORA metrics
- Cycle Time, Lead Time, WIP
- Estimation Accuracy
- Commitment Evaluation (e.g.: Planned vs Actual Velocity)
- Backlog Growth and Burn rate
- Risk Reduction Rate (Identified, Treated by Response Type, Residual)
- % of projects on budget
- % of Challenged Projects in a Program or Portfolio
- % of Change Orders $ to original SOW $
- Internal Rate of Return
- Net Present Value
- Payback Period (see the discounted cash flow sketch after this list)
- Net Profit Margin
- Capacity Utilization Rate (e.g.: Profitability per Project Management Resource)
- Account Growth Revenue
- Cash Conversion Rate (Revenue Recognition Frequency)
- Concept to Cash Cycle Time
- Marketing Metrics
- % of Complaints over Product/Program
- Customer Satisfaction Index (e.g.: NPS)
- Training Efficiency
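Several of the financial KPIs above (Net Present Value, Internal Rate of Return, Payback Period) reduce to the same discounted cash flow arithmetic. Here is a minimal sketch, with purely hypothetical cash flows:

```python
# Discounted cash flow sketch; the cash flows below are hypothetical.
def npv(rate: float, cash_flows: list[float]) -> float:
    """NPV = sum of CF_t / (1 + rate)^t, with t = 0 as the initial outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_period(cash_flows: list[float]):
    """First period in which cumulative (undiscounted) cash flow turns non-negative."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return t
    return None  # never pays back within the horizon

def irr(cash_flows: list[float], lo: float = 0.0, hi: float = 1.0) -> float:
    """Solve NPV(rate) = 0 by bisection; assumes NPV changes sign on [lo, hi]."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-100_000, 30_000, 40_000, 50_000, 20_000]  # initial outlay, then yearly inflows
print(f"NPV @ 10%: {npv(0.10, flows):,.0f}")       # ~11,557 -> value-adding at a 10% hurdle rate
print(f"Payback:   year {payback_period(flows)}")  # year 3
print(f"IRR:       {irr(flows):.1%}")              # ~15.2%
```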