The agile police force

Chris Combe (he / him)

--

Agile Police Force (from Team America: World Police)

Editor’s note: Chris Matts pointed out that John Cutler also has a blog by the same name; I only found this out after I had authored the article and was not ready to rename it. Check out John’s article here.

Complex adaptive systems

When large complex organizations invest in ‘Agile Transformations’, they often expect results faster, and at a larger scale, than is realistic. Words like Agile and other overloaded terms like velocity sound desirable: who doesn’t want to go faster?

That has consequences for teams and the organization, and the agile coaches are often the ones expected to deliver the changes rather than the organization as a whole. Quite often local optimization is only possible in the short term, as tackling systemic impediments requires buy-in from across the organization.

This is further constrained by a complex web of challenges that includes new roles, organizing structures, metrics, tooling, and terminology, plus a significant number of agile coaches who are in many cases new to the organization, often full of hope, and internally perceived as theoretical, hand-wavey types living in the clouds.

“you do not understand our challenges,

we have different problems to other companies”

— incumbents

This is best illustrated by incumbents being sceptical of the changes and risk averse, so they ask agile coaches for extremely detailed checklists, templates, and roles-and-responsibilities definitions that end up far more granular and constraining than how things worked before.

Ironically, this approach ends up creating more learned helplessness rather than less, as the so-called empowered teams are now dependent on templates and checklists to know how to do their jobs, even though in many cases the work is only marginally different from before.

As soon as things get difficult or go off-piste, the templates and agile coaches are deemed ineffective.

Learning to learn

When executives think that agility only happens at the team level, there are limited opportunities outside of local optimization. A highly paid executive who is nearing the end of their career is not looking to fundamentally change how they lead. So before you can learn how to learn, you must learn how to see (the system of work).

An organization needs to learn how to shift from focusing only on the delivery of work to also focusing on how the system of work is performed and improved. This involves a lot of learning of ‘new’ practices which many individuals were already aware of but seldom had the discipline to put into practice. Learning how to learn and improve is critical if organizations want to deliver customer value and keep up with the competition.

Sadly, many organizations expect “Agile” to solve their problems. The reality is that agile simply helps you find the problems; it doesn’t tell you how to solve them. This is ironic: because Agile is often sold as a solution, executives expect it to solve things. That creates tension when an organization is looking for answers, and they don’t find it helpful when they are told that they are empowered and need to think for themselves.

Focusing on the small

Often organizations end up with measurable improvements (changes) through the Hawthorne effect, as more meaningful outcomes require time and effort to reap the benefits. Agile coaches are desperate to demonstrate their value-add, and the savvy amongst them look for small victories to amplify a coach’s impact in the short term, when in reality things take longer and tend to happen in incremental steps, and at different rates throughout an organization.

This can also lead to agile coaches spoon-feeding teams to get them up and running quickly, while the teams never learn the why, or what to do when things stop working or new problems are uncovered via retrospectives.

I’ve seen this happen with other improvement programs, such as lean or simplification programs, where people optimize to meet targets rather than establish a culture of continuous improvement. Programs have end dates, which puts a big damper on the continuous part.

The priority ends up being to get teams into a regular cadence, measuring four or so metrics with little context as to why they are doing so. The gap between a new scrum master and a seasoned agile coach is significant, and giving people a little knowledge can be more harmful than useful in the medium to long term.

People try things until they find edge cases where the practices do not work, then either revert or try to carry on without any support, tying themselves in knots because they do not have an experienced practitioner by their side to help them through the learning.

This is foundation building rather than a sustained rhythm of learning and improvement. Teams, especially those with existing systems / applications in dire need of refactoring / rearchitecting, are rarely supported by stakeholders in investing the time and effort to improve their ability to build, test, and deliver.

One metric / dashboard to rule them all

One approach people take is deploying standard metrics across an organization, used in a consistent way; this addresses enterprise concerns such as data sourcing, a common approach to calculations, and a ‘consistent look and feel’.

When these metrics / dashboards are surfaced organization-wide, leadership cannot help but compare their teams against their peers. A common anti-pattern is velocity: why would you not want to keep increasing it, and what could go wrong? How is that any different from cracking a whip?

This can be destructive, and there is a whole host of things wrong with ‘vanity metrics’, especially if leadership compares teams across the company rather than within their own organization and context; this leads to shaming rather than supporting.

This can be further complicated when the agile coaches in the organization become the enforcers of the metrics, e.g., team A is not performing, so the coaches are sent in to fix the team’s performance. This absolves leadership of any opportunity to understand the system of work and the constraints the teams may be facing.

When it comes to metrics and how they are presented, be careful they do not turn into a beauty pageant won by teams who do not suffer the same impediments as others. I know countless teams at the mercy of 1990s-era architecture styles that have constrained them so much that they don’t know how to engineer their way out.

I suggest you limit the views to a specific persona / context to enable useful, rather than harmful, views. Some metrics are useful for a team; others are more useful to a team of teams or above. When metrics are made visible to higher levels, they should be summarized to protect the innocent.
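As a minimal sketch of what “summarized to protect the innocent” could look like, the hypothetical Python below rolls team-level cycle times up into a single area-level distribution, so higher levels see the spread without a named league table (all team names, areas, and figures are illustrative):

```python
from statistics import median, quantiles

# Illustrative team-level cycle times in days, grouped by business area.
# All names and numbers are made up; a real version would pull from tooling.
team_cycle_times = {
    "payments": {"team-a": [4, 6, 9, 5], "team-b": [12, 15, 11, 14]},
    "lending":  {"team-c": [20, 18, 25, 22], "team-d": [7, 8, 6, 9]},
}

def area_summary(teams):
    """Roll team samples up into one area-level view, hiding per-team figures."""
    samples = [days for team in teams.values() for days in team]
    return {
        "median_days": median(samples),
        "p85_days": quantiles(samples, n=100)[84],  # 85th percentile
        "teams": len(teams),  # a count of teams, not their names
    }

for area, teams in team_cycle_times.items():
    print(area, area_summary(teams))
```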

I’ve seen this happen with the infamous DORA metrics: it was too hard to measure things directly due to gaps in the tooling data model, so a proxy based on a JIRA attribute was measured instead. This led to teams creating fake JIRA releases to look good, as their management had set explicit performance targets.

Think in terms of relative improvement; this way we encourage improvements to the system of work rather than a race to an artificial target, a target most teams were never going to hit given their technical debt, technology stack, and a whole host of other variables.
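As a rough illustration (the team names and figures below are hypothetical), relative improvement compares each team against its own baseline rather than against an absolute target:

```python
# Hypothetical quarterly lead times in days; lower is better.
baseline = {"team-a": 30.0, "team-b": 12.0}
latest   = {"team-a": 24.0, "team-b": 11.0}

for team, before in baseline.items():
    after = latest[team]
    improvement = (before - after) / before  # relative to the team's own past
    print(f"{team}: {improvement:.0%} improvement on its own baseline")

# team-a shows a 20% improvement despite still being "slower" than team-b in
# absolute terms, which is exactly what an absolute target would hide.
```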

Another thing to watch out for is the cadence of the business and products in question. In parts of an organization with business models that benefit from time to market, teams will face far less resistance than in parts where people are more conservative, risk averse, and focused on longer-term sustainable revenue (e.g. a bank with 30+ year mortgages), or in controlling functions (e.g. your compliance department).

It is quite likely that the teams already doing well overcame their own impediments over years, rather than through anything recently done as part of a centralized effort.

For example, automating tests takes time, as does building up stakeholder trust. If you want to speed up your time to market you must increase quality, and that quality needs to be repeatable; that is where a healthy amount of automation and testing can be used to demonstrate that things are improving and now need far less manual inspection, because quality is being built into the work rather than added after it.

Justifying existence

If you are an agile coach and new to a team or the organization, it is incredibly unlikely that a team will feel comfortable enough to share actual issues they are facing if they think you will run off to management. Transparency and honesty require trust, which takes time. Turning up to a team’s next meeting with a set of red metrics is only going to cause the team to close off.

Things get complicated through incentive alignment, or how agile coaches prove their value to the organization. There is a real chance that the agile coach becomes more interested in the outputs of the teams or, worse, the outputs of the agile coach themselves.

This perverse incentive usually ends up challenging integrity: coaches have to decide between supporting and nurturing the teams and focusing on reportables, such as events held or run, training completed, maturity assessments performed, or changes in metrics like velocity.

So now what

One of the constraints we all need to be aware of is top leadership’s patience and expectation of a return on investment. This is particularly tricky when leadership is not used to thinking in terms of outcomes and is looking for measurable outputs or big changes. This is an impedance mismatch: we are looking to create sustainable change and continual improvement over time. That is harder to demonstrate, especially when things often get worse before they get better.

I’ve seen this happen too often for it to be a myth. I believe it happens because there is so much inefficiency in the system of work that once things change, perceived productivity drops: teams start focusing on getting some work finished, rather than doing all of the work but never completing anything. People tend to confuse being busy with being productive; doing lots of work without finishing it quickly is not a good measure.

An agile coach has a small window to help the team move out of the trough of despair and back up to something more sustainable. The challenge is managing the expectations of stakeholders who aren’t comfortable hearing that a team may not deliver work they were expecting.

Putting expectations into context with leadership is critical to enabling a sustainable pace of improvement, especially since quite often things get worse before they get better; we see this in the Kübler-Ross J-curve, the Satir change model, Diffusion of Innovations, and similar models.

This is further worsened by language such as ‘fail fast’. If your organization is risk averse, this is the last language you should be using when trying to create organizational awareness of validated learning. Having language that works in your organization is far more effective than what is in the textbooks. This is incredibly important if you want things to embed; beware of people who already know the terms thinking that you are washing away the integrity of the terminology. Real practitioners will not have a problem adapting to company-specific terminology.

Wrapping things up

Investing in metrics, dashboards, and visualization is essential if you want to measure improvements to ways of working; however, what you measure, how you measure it, and crucially how you visualize it are even more important from a behavioural perspective.

Teams won’t trust agile coaches if the data is being used to ‘snitch’. If teams suspect the data will be used for enforcement or punishment, they will gamify things to get management off their backs.

Having useful incentive systems in place can really help here, e.g., using relative improvement metrics for comparative views rather than absolute ones, not just at a team level but also at an organizational level.

You can make improvement fun, for example with an awards ceremony rather than explicit absolute targets: most improved over time, most sustained performance, most helpful team (a team that helps other teams).

If you treat improving the work as being as important as the work itself and invest in regular retrospection driven by data, people can discuss problems based on that data rather than just their opinions.
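One hypothetical way to seed such a retrospective with data is to flag the work items that took unusually long relative to the team’s own history; the item IDs and the 2x threshold below are assumptions for illustration, not a standard:

```python
from statistics import median

# Illustrative work items closed last sprint: (id, cycle time in days).
closed_items = [("PAY-101", 3), ("PAY-102", 4), ("PAY-103", 18), ("PAY-104", 5)]

typical = median(days for _, days in closed_items)
outliers = [(item, days) for item, days in closed_items if days > 2 * typical]

# Bring the outliers to the retro as discussion starters, not as verdicts.
for item, days in outliers:
    print(f"{item} took {days} days vs a typical {typical}: what got in the way?")
```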

Consider keeping the number of agile coaches you introduce low, but with high degrees of expertise: a small team of experts focused on capability building (an agile coach production line), with a clear set of exit criteria and a few named individuals who will remain if they choose, so that everyone has a clear view of how things should work once the ‘transformation’ dollars are spent. This also ensures that your central group of experts is not expected to always be there for a team, and that teams know they need to keep learning for themselves over time.

There are no silver bullets; we are not hunting werewolves.
