Practical ways to measure and track the progress of Agile projects
Ensuring that key data is visible in an Agile environment
Dave Browett – March 2013
I am a Project Manager! • These slides summarise experience collected over many years – most recently at Micro Focus • I’ve been managing Agile projects in various capacities since approximately 2003 • This is not an Agile “primer” – I’m assuming that you all have a basic knowledge of Agile • Some of this is not for the Agile purist – I have included an appropriate warning…
Scrum is good for team communications • Progress • Plans • Impediments • Review • Retrospective
Challenges • Should we be attempting to standardise these reports/communications? • Self-organized teams vs consistent metrics across teams • Providing senior management/stakeholders with high-level reports that allow: • Progress against a release to be easily understood • Any “at risk” items to be flagged as early as possible • Any key dependencies/issues to be raised as early as possible
Self-organized teams vs consistent metrics across teams Should we standardise? • Communication of issues, dependencies etc. • Metrics • Iteration length • Velocity Typically, where there is more than one Scrum team, any issues between them can be raised and resolved at a “Scrum of scrums”. How frequently these need to happen will differ from project to project – the key thing is to provide this information in a timely fashion so that it minimises the impact on other teams’ iterations
Challenge – Providing senior management/stakeholders with high-level reports • Any key “at risk” items flagged as early as possible • Any key dependencies/issues raised as early as possible • Progress against a release easily understood • Clearly defined payload • Ability to calculate how much of the payload can be expected to be achieved based on historical/average velocity
Where are we? • “Our velocity is 40” • “We’ve done 280 story points” • “We’ve done 7 out of 9 iterations” • “We’ve spent 4000 man-hours” • Only when we know the TOTAL MUST-HAVE payload can we use the above information to report how we’re doing and predict what to expect...
Where are we? [Burn-up chart: scope in story points against time in iterations – velocity = 40, 280 story points completed after 7 of 9 iterations]
Only when we know the TOTAL MUST-HAVE payload can we use the above information to report how we’re doing and predict what to expect... [Burn-up chart: velocity = 40, 280 story points done after 7 of 9 iterations, projecting 360 by iteration 9 against a Must-Have payload of 300 – “Looking Good!”]
[Burn-up chart: the same projection (360 story points by iteration 9) against a Must-Have payload of 200 – “Easy!”]
[Burn-up chart: the same projection (360 story points by iteration 9) against a Must-Have payload of 400 – “Challenged!”]
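The three scenarios above reduce to a simple projection: remaining iterations × velocity, added to the points already done, compared against the MUST-HAVE payload. A minimal sketch in Python (the function name and the 1.5× “Easy” margin are illustrative assumptions, not from the slides):

```python
def release_status(velocity, iterations_done, total_iterations,
                   points_done, must_have_payload):
    """Project the story points delivered by the end of the release at
    the current velocity, and compare against the MUST-HAVE payload.
    The 1.5x "Easy" margin is an illustrative assumption."""
    remaining = total_iterations - iterations_done
    projected = points_done + velocity * remaining
    if projected < must_have_payload:
        return projected, "Challenged"
    if projected >= must_have_payload * 1.5:
        return projected, "Easy"
    return projected, "Looking good"

# The charts' scenario: velocity 40, 280 points done after 7 of 9
# iterations, so the projection is 280 + 2 * 40 = 360 story points.
for payload in (300, 200, 400):
    print(payload, release_status(40, 7, 9, 280, payload))
```

With the chart's numbers this reproduces the three verdicts: payload 300 is "Looking good", 200 is "Easy", 400 is "Challenged".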
Payload Calculation – predictable delivery [Chart: scope in story points against time in iterations, with lower and upper velocity lines (v1 and v2) and a payload band from MMF − y to MMF + y; the best/worst-case delivery range can be predicted within the zone where the velocity lines cross the band]
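The zone above can be turned into a delivery-range calculation: the best case assumes the smallest payload (MMF − y) at the higher velocity, the worst case the largest payload (MMF + y) at the lower velocity. A sketch, with illustrative numbers (the function name and example values are assumptions):

```python
import math

def delivery_range(mmf, y, v_low, v_high):
    """Best/worst-case number of iterations to deliver the MMF payload,
    given a payload uncertainty of +/- y story points and a velocity
    expected to fall between v_low and v_high."""
    best = math.ceil((mmf - y) / v_high)   # smallest payload, fastest pace
    worst = math.ceil((mmf + y) / v_low)   # largest payload, slowest pace
    return best, worst

# e.g. MMF = 300 +/- 30 story points, velocity between 35 and 45:
print(delivery_range(300, 30, 35, 45))  # (6, 10)
```

Reporting the range rather than a single date is what keeps the forecast honest about the uncertainty in both scope and velocity.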
Predictable Velocity • Teams need to deliver a predictable number of story points each iteration • Obviously this number may vary from iteration to iteration depending on sickness/holiday etc., but the key principle is that the team commits to, and delivers, a number of story points related to its performance in previous iterations. • “See-sawing” velocity is a warning sign – it could mean that the team is • Over-committing • Not estimating or looking ahead sufficiently • Not producing releasable software within the iteration • Beware of “iceberg agile”...
Beware of “Iceberg Agile”! • A key aspect of Agile is transparency. • Every iteration is displayed, with each story and its tasks updated to show a picture of the state of the iteration that is as accurate and “up to the minute” as possible. • This transparency builds trust at all levels • The Scrum team can show progress both in terms of achieved velocity and demonstrable features • Stakeholders/managers can take confidence from a regular cadence that provides demonstrable features • But if your team is doing “Iceberg Agile” this breaks down – not all the planned stories will be completed, and the reported velocity will be lower than expected or will swing from low to high…
Beware of “Iceberg Agile”! • The team may still believe they are on track: the carried-over stories couldn’t be done in a single iteration, the work is still being done, and it will all come together one or two more iterations down the line... • But the transparency has been lost • The work done on these carried-over stories is hard to estimate • These stories can’t be demonstrated in the review before they have been finished! • If your team is doing “Iceberg Agile” you will typically see only part of what was planned being demonstrated in the review; a significant amount of work will be under the surface – difficult to assess in terms of progress and impossible to demo. • Carried-over stories should be the exception rather than the rule, and demoing stories should become a key consideration when assessing and accepting stories (in fact, if you are wondering whether you have one story or two, it’s good practice to think about how you’re going to demo the feature)
Beware of “Iceberg Agile”! • Teams that do “Iceberg Agile” will typically • Carry over several stories as common practice • Be unable to demonstrate all planned stories • Have a velocity that see-saws as the credit for carried-over stories gets re-allocated one or two iterations down the line • These teams suffer from a lack of transparency, and consequently it is difficult to predict what they are capable of consistently achieving. Be aware, and try to stop your team doing “Iceberg Agile”!
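The warning signs listed above can be checked mechanically against a few iterations of history. A sketch, where the carry-over and velocity-swing thresholds (20% and 30%) are illustrative assumptions, not figures from the slides:

```python
def iceberg_warning_signs(history, carryover_threshold=0.2,
                          swing_threshold=0.3):
    """Flag "Iceberg Agile" symptoms. Each history entry is a tuple of
    (planned_stories, carried_over_stories, velocity) for one iteration."""
    planned = sum(p for p, _, _ in history)
    carried = sum(c for _, c, _ in history)
    velocities = [v for _, _, v in history]
    mean_v = sum(velocities) / len(velocities)
    signs = []
    if planned and carried / planned > carryover_threshold:
        signs.append("stories routinely carried over")
    # Largest deviation from mean velocity, as a fraction of the mean.
    if max(abs(v - mean_v) for v in velocities) / mean_v > swing_threshold:
        signs.append("see-sawing velocity")
    return signs

# Four iterations with heavy carry-over and velocity swinging around 40:
print(iceberg_warning_signs([(5, 2, 20), (5, 1, 60), (5, 2, 25), (5, 1, 55)]))
```

A team flagged on both counts is the classic iceberg pattern: planned stories slipping under the surface, with the credit resurfacing later as a velocity spike.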
Velocity calculations across multiple teams – Agile Purist Warning! • Strictly, you can’t simply “add up” story points across teams (because each team is likely to have a different measure) • Then again – surely it doesn’t make sense to have wildly different story-point measures across teams… (find an agile purist near you and discuss!) • So – perhaps the pragmatic solution is to ensure that story-point measures across teams are of the same order • Assuming the principle above holds, these payload calculations can be used for an entire project across several teams – as a “high-level indication”.
Bear in mind also… • Estimation is always an inexact science! • Beware of false precision, e.g. “37.5 story points remaining”
Possible actions for a challenged project • Increase resource – although adding people to a team is likely to *reduce* its velocity in the short term, and bringing in a new team is also likely to require ramp-up/familiarisation. • We can increase velocity on new features by reducing velocity on other things… • Undertake a business review of “critical defects” • Temporary relaxation of Service Levels/SLAs • Reduce payload – business review of Must Have features
Team workload categories • To maximise velocity on new features we need to reduce velocity in other areas
Managing Payload – a balanced release [Chart with two capacity lines: the maximum story points achievable based on average total velocity, and the lower maximum achievable taking maintenance into account. Payload above the total-velocity line is in the AT RISK region; payload between the two lines is in the CAUTION region; payload below the maintenance-adjusted line is not threatened]
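The banding above can be sketched as a classification against the two capacity lines. A minimal sketch (function and variable names are illustrative, not from the slides):

```python
def classify_payload(payload_sp, max_sp_total, max_sp_after_maintenance):
    """Classify a release payload against the two capacity lines:
    above total capacity = AT RISK; between the maintenance-adjusted
    and total lines = CAUTION (achievable only by diverting maintenance
    effort); at or below the maintenance-adjusted line = NOT THREATENED."""
    if payload_sp > max_sp_total:
        return "AT RISK"
    if payload_sp > max_sp_after_maintenance:
        return "CAUTION"
    return "NOT THREATENED"

# e.g. total capacity 400 sp, of which 320 sp remain once maintenance
# is accounted for:
for payload in (300, 380, 450):
    print(payload, classify_payload(payload, 400, 320))
```

The CAUTION band is exactly the "maintenance as divertable capacity" idea from the previous slides: payload there is deliverable, but only by trading away maintenance effort.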
Example 1 – Well-balanced Release [Chart: estimated MUST HAVE and non-MUST HAVE payload shown against the two capacity lines; no payload in the AT RISK region]
Example 2 – Under-committed Release (slack) [Chart: total estimated payload falls well below the capacity lines, leaving slack]
Example 3 – Over-committed Release [Chart: estimated payload exceeds the capacity lines, leaving payload in the AT RISK region]
Example 4 – Release with non-MUST HAVE payload at risk, with additional CAUTION indicator [Chart: MUST HAVE payload fits, but some non-MUST HAVE payload falls in the CAUTION and AT RISK regions]
Take-aways • Think about the data that your teams provide to other Scrum teams and stakeholders • It should be timely and accurate • Communicate the impact of impediments • Think of % maintenance as capacity which could be diverted to developing new features • Understand the importance of teams committing to an iteration and having a measurable team velocity that allows forecasts to be made • Clearly identifying MUST HAVE items makes a payload more realistic and achievable, with the stretch goal more clearly defined • Beware of the signs of “iceberg agile” • Beware of false precision when providing estimates • Try to classify your release using the release-balancing concept as early as you can, once the MUST HAVE payload is sufficiently defined • Classic project management activities – such as calculating the critical path and understanding dependencies – are still needed in the Agile world!
Thank you – any questions? My blog on WordPress – http://davebrowettagile.wordpress.com/ David.Browett@microfocus.com