
April FlowViz Updates

With one more weekend until the UK starts to reopen and we begin moving back towards normality, I again used some free time over the weekend to scratch some creative itches when it comes to FlowViz.

Daily Work In Progress & Work Item Age

Increasingly I’m learning through the #DrunkAgile series that teams focusing on WIP alone is not enough: Work Item Age should play an increasingly important part in how teams look at their process/workflow. I started to tackle this in the previous release, but I’ve since been thinking more about what could be done in the visuals within FlowViz. I’ve also long loathed the Agile community’s love of Cumulative Flow Diagrams (CFDs) - yes, I know they can show you your bottlenecks and can be used as a learning aid with Little’s Law, but I’ve found through experience that they’re simply not practical. In the past five years I would say I’ve experienced a 10:1 ratio of people who don’t get CFDs versus people who encourage and advocate their usage. This felt like a good time to remove it from the template and focus on something better: something more insightful, creative and actionable.

[Image: Daily WIP and Work Item Age chart]

I got most of the inspiration for this visual (like most things in the Agile Data/Metrics world!) from Troy Magennis. He’s used something very similar in his team dashboard for a while, and given the data I needed was already there in the Azure DevOps OData API, it was pretty easy to do.

[Image: Daily WIP and Work Item Age chart]

This chart shows the number of items in progress on a given date, as well as highlighting how old those respective items are. The chart groups item age into ≤7 days, ≤14 days, ≤28 days and >28 days in progress. It lets teams analyse two key factors in the stability of flow through their system: keeping WIP optimised (factor one) and spotting when the age of open work grows (factor two). Teams should try to balance how high the bar is (WIP) and how orange the bars are (Age).
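
For anyone who wants to prototype the same view outside of Power BI, the calculation behind it is fairly simple. Below is a rough Python/pandas sketch, assuming a hypothetical extract of work items with a start date and (optional) completion date; FlowViz itself does the equivalent in DAX against the Azure DevOps OData feed, so treat this purely as an illustration of the logic.

```python
import pandas as pd

# Hypothetical work item extract: when each item entered "in progress" and
# when (if ever) it was completed. Column names are assumptions for illustration.
work_items = pd.DataFrame({
    "id":         [1, 2, 3, 4],
    "start_date": pd.to_datetime(["2021-03-01", "2021-03-05", "2021-03-20", "2021-04-01"]),
    "done_date":  pd.to_datetime(["2021-03-18", None, "2021-04-02", None]),
})

# The age groupings used by the chart: ≤7, ≤14, ≤28 and >28 days in progress.
def age_bucket(days: int) -> str:
    if days <= 7:
        return "≤7 days"
    if days <= 14:
        return "≤14 days"
    if days <= 28:
        return "≤28 days"
    return ">28 days"

# For every calendar day, find the items that were in progress and how old they were.
rows = []
for day in pd.date_range("2021-03-01", "2021-04-10", freq="D"):
    in_progress = work_items[
        (work_items["start_date"] <= day)
        & (work_items["done_date"].isna() | (work_items["done_date"] > day))
    ]
    for _, item in in_progress.iterrows():
        rows.append({"date": day, "bucket": age_bucket((day - item["start_date"]).days)})

# One column per age bucket, one row per day - the input to a stacked bar chart.
daily_wip = pd.DataFrame(rows).groupby(["date", "bucket"]).size().unstack(fill_value=0)
print(daily_wip.tail())
```

The height of each day’s stacked bar is the WIP, and the split across buckets shows how that WIP is ageing.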

Information Panel

I’ve invested quite a bit of time into creating quite a thorough Wiki for anyone who downloads FlowViz. However, I find most people tend to just download it and figure it out for themselves. I recently watched a Guy in a Cube video where Adam showed a pretty cool way to create an information panel for your report, so I decided to do something similar!

[Animation: the information panel guide in action]

Now, even if people don’t make it to the Wiki, there is a small button that can at least help them if they get stuck on any of the pages. It was pretty simple to make, just using Google Slides and ensuring the backgrounds were transparent.

[Screenshot: the information panel button]

The hard part was working it all into the existing bookmarks; however, once I got into a rhythm it didn’t take too long. Check it out and let me know your thoughts!

Other Minor Updates

I also managed to rattle through some minor bug fixes, mainly fixing the board column order in the Work Item Age chart and paying off some tech debt by removing unused columns, changing formulas and reducing date ranges for certain queries. The Wiki has also been updated with various new screenshots, and I’ve added some guidance for TFS (I know!) users to the Wiki FAQs, as someone DM’d me on the Azure DevOps Club Slack group asking for help getting set up.
Finally, I’ve started experimenting with using the kanban board within GitHub for ideas on new features and to give people a bit more insight into what is being worked on or coming soon. Not quite there in keeping it updated, but getting there :)

[Screenshot: the FlowViz kanban board in GitHub]

Anyway, the new version is now available - please use the GitHub repo or Azure DevOps Marketplace to download the latest version.

If you have any ideas yourself I’d love to hear them in the comments below or via the Discussion page on GitHub.

Product Metrics for Internal Teams

Disclaimer: this post describes one way, not the only way, to approach product metrics for internal teams.

As our move from Project to Product gathers pace, it’s important that we not only introduce a mindset shift and promote different ways of working, but also ensure that we are measuring things accordingly, and showcase examples to help others on their journey. As Jon Smart points out, there is a tipping point in any approach to change where you start to cross the chasm, with people in the early/late majority wanting to see social proof of the new methods being implemented.

[Image: crossing the chasm - early/late majority adoption curve]

I’ve noticed this becoming increasingly prevalent in training sessions and general coaching conversations, with the shift away from “what does this mean?” or “so who does that role?” towards questions such as “so where are we in PwC doing this?” and “do you have a PwC example?”
These are signs that things are probably going well, as momentum is gathering and curiosity is growing, but it’s important that you have specific examples from your own context to hand to gain buy-in. If you can’t provide ready-made examples from your own organisation then it’s likely your approach to new ways of working will only go so far.

This week I’ve been experimenting with how we measure the impact and outcomes of one of the products I’ve taken on a Product Manager role for (#EatYourOwnDogFood). Team Health Check is a web app that allows teams to run anonymous health checks on their ways of working, using the results to identify experiments they want to run to improve, or to spot trends around things that may or may not be working for them. Our first release of the app took place in December, with some teams adopting it.

[Screenshot: the Team Health Check web app]

In a project model, that would be it and we’d be done. However, we know that software being done is like lawn being mowed. If it’s a product, then it should be long-lived, in use and leading to better outcomes. So, with this in mind, we need to reflect that in the product metrics we choose to track.

Adoption & Usage

One of the first things to measure is adoption. I settled on three main metrics to track for this: the number of teams who have completed a team health check, adoption across different PwC territories, and repeat usage by teams.

[Image: adoption metrics]

This way I can see what adoption has been like in the UK, which is where I’m based and where it’s predominantly marketed, compared to other territories where I make people aware of it but don’t exactly exert myself in promoting it (the hypothesis being that you’d expect to see mostly UK teams using it). I can also get a sense of the number of teams who have used it (to support the continued investment in it) and see which teams are repeat users, which I would associate with them seeing value in it.

[Image: adoption metrics (continued)]
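
If you wanted to reproduce these three numbers outside of Power BI, the logic is simple enough to sketch. The Python below is illustrative only, assuming a hypothetical table of health check submissions with team, territory and completion date columns; the real report calculates the same things in Power BI.

```python
import pandas as pd

# Hypothetical health check submissions: one row per completed check.
# Column names and values are made up for illustration.
responses = pd.DataFrame({
    "team":      ["Alpha", "Alpha", "Bravo", "Charlie", "Delta"],
    "territory": ["UK", "UK", "UK", "Germany", "UK"],
    "completed": pd.to_datetime(["2020-01-10", "2020-02-10", "2020-02-14",
                                 "2020-03-01", "2020-04-20"]),
})

# 1. Number of distinct teams that have completed at least one health check.
teams_adopted = responses["team"].nunique()

# 2. Adoption split across territories (distinct teams per territory).
teams_by_territory = responses.groupby("territory")["team"].nunique()

# 3. Repeat usage: teams that have completed more than one check.
checks_per_team = responses.groupby("team").size()
repeat_teams = checks_per_team[checks_per_team > 1].index.tolist()

print(teams_adopted, teams_by_territory.to_dict(), repeat_teams)
```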

Software Delivery Performance

We also want to look at technical metrics, as we want to see how we’re doing from a software delivery performance perspective. In my view, the best source for these is the set of Software Delivery Performance metrics presented each year as part of the State of DevOps/DORA report.

I’m particularly biased towards these as they have been formulated through years of research with thousands of organisations and software professionals, and have been shown to correlate directly with different levels of software delivery performance. They are actually really hard to track! So I had to get a bit creative. For our app we have a specific task in our pipeline associated with a production deployment, which thankfully has a timestamp in the Azure DevOps database, as well as a success/failure data point.
Using this we can determine two of those four metrics: Deployment Frequency (for your application, how often code is deployed to production or released to end users) and Change Failure Rate (what percentage of changes to production, or releases to users, result in degraded service and subsequently require remediation).
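
As a rough illustration of how those two numbers fall out of that data, here’s a minimal Python sketch. It assumes a hypothetical log of production deployments with a timestamp and a success/failure flag, mirroring the pipeline task data described above; the column names and the “failed deployment ≈ change failure” simplification are my assumptions, not necessarily how the report is built.

```python
import pandas as pd

# Hypothetical production deployment log pulled from the pipeline task:
# one row per deployment, with a timestamp and whether it succeeded.
deployments = pd.DataFrame({
    "deployed_at": pd.to_datetime(["2020-04-02", "2020-04-20", "2020-05-06",
                                   "2020-05-20", "2020-06-03"]),
    "succeeded":   [True, True, False, True, True],
})

# Deployment Frequency: how often code reaches production, expressed per month here.
span_days = (deployments["deployed_at"].max() - deployments["deployed_at"].min()).days
deploys_per_month = len(deployments) / (span_days / 30.44)

# Change Failure Rate: percentage of production changes resulting in degraded
# service and needing remediation (approximated here by failed deployments).
change_failure_rate = 100 * (~deployments["succeeded"]).mean()

print(f"~{deploys_per_month:.1f} deployments per month, "
      f"{change_failure_rate:.0f}% change failure rate")
```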

So it looks like we’re currently a Medium-ish performer for Deployment Frequency and an Elite performer for Change Failure Rate, which is OK for what the app is, its size and its purpose. It also prompts some questions around our changes: is our batch (deployment) size too big? Should we in fact be making even smaller changes more frequently? If we did, could that negatively impact change failure rate? How much would it impact it? All good, healthy questions informed by the data.

Feedback

Another important aspect to measure is feedback. The bottom section of the app has a simple Net Promoter Score style question for people completing the form, as well as an optional free text field to offer comments.

[Screenshot: the feedback question at the bottom of the form]

Whilst the majority of people leave this blank, it has been useful in identifying themes for features people would like to see, which I track on a separate page:

[Screenshot: feedback themes]

Looking at this actually informed our most recent release on 20th May, in which we revamped the UI, changing the banner image and the radio button scale from three buttons to four, as well as making the site mobile compatible.

[Screenshot: the revamped UI]

I also visualise the NPS results, which made for some interesting reading! I’d love to know what typical scores are when measuring NPS for software, but it’s fair to say it was a humbling experience once I gathered the results!

The point, of course, is that rather than viewing this as a failure, you use it to inform what you do next and/or as a counter metric. For me, I’m pleased the adoption numbers are high, but clearly the NPS score shows we have work to do in making it a more enjoyable experience for people completing the form. Are there some hidden themes in the feedback? Are we missing something? Maybe we should do some user interviews? All good questions that the data has informed.
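
For anyone unfamiliar with how the score itself is derived, the classic Net Promoter Score calculation is worth a quick sketch: on a 0-10 scale, 9-10 are promoters, 0-6 are detractors, and the score is the percentage of promoters minus the percentage of detractors, giving a range of -100 to +100. Our app asks an NPS-style question on a smaller scale, so treat the Python below (with made-up responses) purely as an illustration of the standard calculation.

```python
# Classic NPS: % promoters (scores 9-10) minus % detractors (scores 0-6).
# The scores below are made up for illustration.
scores = [10, 9, 8, 7, 7, 6, 5, 9, 3, 8]

promoters = sum(s >= 9 for s in scores)
detractors = sum(s <= 6 for s in scores)
nps = 100 * (promoters - detractors) / len(scores)

print(f"NPS: {nps:+.0f}")  # ranges from -100 to +100
```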

[Screenshot: NPS results]

Cost

Finally, we look at cost, which is of course extremely important. There are two elements to this: the cost of the people who build and support the software, and any associated cloud costs. At the moment we have an interim solution of an extract of people’s timesheets to give us the people costs per week, which I’ve tweaked for the purpose of this post with some dummy numbers. A gap we still have is the cloud costs, as I’m struggling to pull the Azure costs through into Power BI, but hopefully it’s just user error.

We can then use this to compare the cost against all the other aspects, justifying whether or not the software is worth the continued investment and/or is meeting the needs of the organisation.

Overall the end result looks like this:

[Screenshot: the full product metrics report]

Like I said, this isn’t intended to be something prescriptive - more that it provides an example of how it can be done and how we are doing it in a particular context for a particular product.

Keen to hear the thoughts of others - what is missing? What would you like to see given the software and its purpose? Anything we should get rid of?
Leave your comments/feedback below.

Weeknotes #40 - Product Management & 2019 Reflections

Product Management Training

This week we had a run-through from Rachel of the new Product Management training course that she has put together for our budding Product Managers. I really enjoyed going through it as a team (especially using our co-working space in More London) and seeing the actual content itself.

Credits: Jon Greatbatch for photo “This can be for your weeknotes”

What I really liked about the course was that the attendees are going to be very ‘hands-on’ during the training, and will get to apply various techniques that PdMs use to a case study of Delete My Data (DMD) throughout. Having an ‘incremental’ case study that builds through the day is something I’ve struggled with when putting together material in the past, so I’m glad Rachel has put something like this together. We’ve earmarked 28th Jan for the first session we run, with a combination of our own team and those moving into Product Management being the ‘guinea pigs’.

2019 Reflections

This week has been a particularly challenging one, with lots of roadblocks in the way of moving forward. A lack of alignment in new teams on future direction, and a lack of communication to the wider function around our move to new ways of working, mean that it feels like we aren’t seeing the progress we should be, or creating a sense of urgency. Whilst achieving big through small certainly holds true, with change initiatives it can feel like you are moving too slowly, which is the current lull we’re in. After a few days feeling quite down I took some time out to reflect on 2019 and what we have achieved, such as:

  • Delivering a combined 49 training courses on Agile, Lean and Azure DevOps

  • Training a total of 789 PwC staff across three continents

  • Becoming authorised trainers to offer an industry recognised course

  • Actually building our first proper CI/CD web apps as PoCs

  • Introducing automated security tools and (nearly) setting up ServiceNow change management integration to #TakeAwayTheExcuses for not adopting Agile

  • Hiring our first ever Product Manager (Shout out Rachel)

  • Getting our first ever Agile Delivery Manager seconded over from Consulting (Shout out Stefano)

  • Our team winning a UK IT Award for Making A Difference

  • Gaining agreement from leadership on moving from Project to Product, as part of our adoption of new ways of working

All in all, it’s fair to say we’ve made big strides forward this year; I just hope the momentum continues into 2020. A big thank you from me goes to Jon, Marie, James, Dan, Andy, Rachel and Stefano, not just for their hard work, but for being constant sources of inspiration throughout the year.

Xmas Break

Finally, I’ll be taking a break from writing these #Weeknotes till the new year. Even though I’ll be working over the Christmas period, I don’t think there’ll be too much activity to write about! For anyone still reading this far in(!), have a great Christmas and New Year.

Weeknotes #39 - Agile not WAgile

Agile not WAgile

This week we’ve been reviewing a number of our projects that are tagged as being delivered using Agile ways of working within our main delivery portfolio. Whilst we ultimately do want to shift from project to product, we recognise that right now we’re still doing a lot of ‘project-y’ delivery, and that this will never completely go away. So, in parallel, we’re trying to at least get people familiar with what Agile delivery is all about, even if they’re delivering from a project perspective.

The catalyst for this was really one of our charts, where we look at the work being started and the split between Agile (blue line) and Waterfall (orange line).

The aspiration, of course, is that with a strategic goal to be ‘agile by default’ the chart should indeed look something like it does here, with the orange line only slightly creeping up when needed, but with people generally looking to adopt Agile as much as they can.

When I saw the chart looking like the above last week, I must admit I got suspicious! I felt that we definitely were not noticing the changes in behaviours, mindset and outcomes that the chart would suggest, which prompted a more thorough review.

The review was not intended to act as the Agile police(!), as we very much want to help people move to new ways of working, but to make sure people had correctly understood what Agile, at its core, is really about, and whether they are indeed doing that as part of their projects.

The review is still ongoing, but currently it looks like this (changing the waterfall/agile field retrospectively updates the chart).

The main problems observed were things such as a lack of frequent delivery, with project teams still doing one big deployment to production at the end before going ‘live’ (but lots of deployments to test environments). Projects may be using tools such as Azure DevOps and some form of Agile events (maybe daily scrums), but work is still being delivered in phases (Dev / Test / UAT / Live). As well as this, a common theme was not getting early feedback and changing direction/priorities based on it (hardly a surprise if you are infrequently getting stuff into production!).

Inspired by the Agile BS detector from the US Department of Defense, I prepared a one-pager to help people quickly understand if their application of Agile to their projects is right, or if they need to rethink their approach.

Here’s hoping the blue line goes up, but this time against some of the criteria above, or at least that we get more people approaching us for help with how to get there.

Team Health Check

This week we had our sprint review for the project our grads are working on, helping to develop a team health check web app that teams can use to conduct monthly self-assessments across different areas of team needs and ways of working.

Again, I was blown away by what the team had managed to achieve this sprint. They’ve gone from a very basic, black and white version of the app to a fully PwC branded version.

They’ve also successfully worked with Dave (aka DevOps Dave) to configure a full CI/CD pipeline for any future changes. As the PO for the project I’ll now be in control of any future releases via the release gate in Azure DevOps - very impressive stuff! Hopefully now we can share it more widely and get teams using it.

Next Week

Next week will be the last weeknotes for a few weeks, whilst we all recharge and eat lots over Christmas. I’m looking at finalising training for the new year and getting a run-through from Rachel of our new Product Management course!