Shaping Products for Agility

Despite widespread adoption of Agile frameworks, organisations are failing to be Agile. The causes are often self-inflicted. In this post we will focus on one common contributing factor – how products are structured.

Most organisations have adopted Agile and many of the groups involved in the value chain may be Agile teams. However, the outcome is often less than optimal, with long and unpredictable lead times to turn ideas into value realised by the customer.

When we look at a value stream for an organisation’s product or service the picture revealed is usually a more complex version of this:

In addition to Agile teams, there are usually other groups contributing – for example, Procurement, Finance, HR, Infrastructure, Operations, the PMO, etc. Much additional and often unnecessary complexity can be introduced in the processes and interactions with these groups – the subject of future posts.

Looking through a product lens, we see “vertical slices” of component work that collectively realises the product as the customer understands it. For now, we’ll focus on just this component element of the puzzle.

Due to size and complexity, the product is split into parts that are treated as separate products – let’s call them “Partial Products”. Each Partial Product is assigned to a Product Owner who is responsible for building and maintaining an associated backlog:

This approach contributes much additional and unnecessary complexity.

From a Product Owner’s perspective, the limited scope reduces their view of the customer to the internal teams and stakeholders who are going to consume and build on the output from their team. The loss of sight of, and empathy with, the end customer not only reduces the quality of the backlog content and prioritisation decisions, but also reduces motivation. Ensuring that all the related backlogs contain all the elements necessary to satisfy the end customer is a constant challenge. There are always gaps and overlaps in coverage.

Trying to apply and optimise agility within the constraints of this product structure will have limited success. So, what do we do?

We shape our products to the value stream and widen the scope of our products, product ownership and team accountability – an End-to-End Product:

This is not a new idea – this is the essence of Agile!

  • It’s always worth reviewing the New New Product Development Game (Hirotaka Takeuchi and Ikujiro Nonaka – 1986), which informed thinking for Scrum.
  • LeSS (Large-Scale Scrum) has the concept of Feature Teams based around customer centric Product Backlogs.

This is all very well, but what happens when we have a large complex product that exceeds the cognitive capability and time capacity of a single Product Owner?

Descaling Agile

The first question we ask is “What is driving the complexity, and how can we reduce it?” Sometimes product complexity is higher than it needs to be due to incremental change over time and historic decisions. For example, our product may be a composite of different products, perhaps through acquisition, or may be distributed over multiple departments and/or teams.

To align to value and reduce complexity we start with the customer and work backwards. Form the overall product shape from their needs. Then work out how it needs to be supported by Product Ownership.

Scaling Product Ownership

If we reduce to just the bare essential complexity but the product still exceeds the ability of a single Product Owner, then we scale to the “minimum viable number of Product Owners” by partitioning the product horizontally along the value stream:

The splits should result in realisable value to the end customer, albeit of reduced scope. Modelling product ownership this way enables backlog items that can be articulated as “outcomes” rather than design specifications. This increases the empowerment of teams to innovate and is also much more motivating.

The dependency problem we see with the vertical partitioning approach is greatly reduced. Instead of “hard dependencies” between components, we now have lighter “relationships” and ordering considerations. These can be managed by the group of Product Owners collaborating as a team on the collective superset of the backlog. The overall Product Backlog is best supported as a single backlog with subset views for each Product Owner’s horizontal partition of the value stream.

Finally, match and optimise your team design to the ownership model. This last part may require revisiting the ownership model in the rare situations where the number of teams required exceeds what a single Product Owner can support.


  1. Understand and reduce unnecessary complexity in the value stream
  2. Product structure is a common cause of additional complexity
  3. Descale – reduce complexity by aligning the product shape around customer needs
  4. Scale only to the “Minimum viable number of Product Owners”
  5. Partition the product into horizontal subsets of the value stream
  6. Product Owners collaborate as a team of Product Owners
  7. Manage the backlog as a single Product Backlog with partition subset views
  8. Team design follows Product (avoiding negative consequences of Conway’s Law)

Don’t be a “Best Practice Sheep”!

The idea of Best Practice is universal and seductive! I’m often asked for best practices for Scrum or how to scale Agile. We instinctively want to know the answer and have the recipe for success.
The message of this blog is, please don’t be a “Best Practice Sheep”!

So what is best practice? Here’s one definition:

Best practice comes in all shapes and sizes. We steal checklists, cheat sheets and workshop templates from LinkedIn. We copy the Spotify “Model” or implement a big framework like SAFe – and these are effectively large collections of practices.

And it is an attractive and compelling concept:

So what’s wrong with taking a shortcut with some proven practices?

Well, here are some points to ponder:

  • The success of a given practice is highly dependent on the context for which it is suitable. Practices are often copied and pasted without regard to the context and principles that informed their original derivation.
  • The larger and more comprehensive a practice or set of practices, the more context-specific it becomes. With increasing size, the probability of an unadapted practice being a suitable fit for your specific context decreases.
  • The assumption that a given practice is “Best” prevents us from improving the practice because, clearly, it’s already as good as it can get, and if you mess with it, you might break it!
  • Adopting the same practices as everyone else means that if you do it really well, you might just be as good as them – never better, though!
  • The journey is at least as valuable as the arrival. The engagement, empowerment and ownership that come from a group analysing and deriving their own answers to challenges is a large part of the magic of the practice that emerges.
  • Adopting best practices often seems to cut out critical thinking in exchange for passively following a path set out by others who have done the thinking for you.
  • Practices are often a static snapshot in time. For example, the “Spotify model” is a generalised view of patterns used at Spotify circa 2012, as presented by Henrik Kniberg (who made it clear that the content of the talk would be out-of-date in 6 months).
  • Who says that a given practice is “Best”? Is there an official, unbiased body? Is it just very commonly shared, so we assume it must be best?

So what should we do instead?

  1. Start with your context and understand the problem that needs to be solved.
  2. Apply principles to guide the selection of potentially appropriate patterns and practices.
  3. Use small experiments to evolve and emerge your own context-specific practices.
  4. Repeat! Continuous Improvement should just be the way we always work!

This diagram shows a spectrum from more abstract principles through to concrete rules and practices, along with the behaviours we are likely to observe at each point.


  • Don’t dumbly follow best practices!
  • The hard work of emerging our own practice is worth it!
  • Best Practices can be used as input but explore the underlying principles and adapt to fit your context.
  • Emerge your own practice, using small safe-to-fail experiments to validate/invalidate your assumptions and ideas.
  • Evolve through constant inspection and adaptation – you are never done, as your practice is never “Best”!

For a more detailed look at the challenges of best practices and what to do with them, here’s a talk I gave on the topic.

Organisational WIP: too many projects hurt throughput

The performance of software delivery teams is so often undermined by team members being shared with other teams or regularly borrowed for operational support and other emergencies.

We know that small, co-located, cross-functional teams provide optimum performance. So why is it so difficult to have a small group (7 +/- 2) dedicated and focused on one project at a time?

One major reason is that organisations have too much Work In Progress – WIP. Just count the number of in-flight projects and then count the number of teams available to deliver them. If there are more active projects than teams then there is too much WIP. Reducing the number of active projects results in a dramatic increase in the rate of value delivered to customers.

Let’s explore why spreading people across multiple projects is so detrimental:

Switching Cost

It takes time to get our heads around a problem domain and then find the relevant part of the system to work on. The more we switch between business and technical domains, the more time is lost to spinning up. There is much research and writing on this topic; here are a couple of references:

In their excellent book Peopleware, Tom DeMarco and Tim Lister describe “flow”, a state of deep concentration necessary for “high-momentum” tasks such as engineering, design, development and other aspects of building software. It takes 15 minutes or more to achieve this state and only a moment to lose it. Every time a team member is interrupted, it costs the company fifteen minutes while they re-enter a productive flow of work.

In another great book, Quality Software Management: Systems Thinking, Gerald Weinberg estimates a switching cost of around 20% for each additional project a team member is shared across. The impact of switching cost on a team member’s available time is illustrated nicely by this graphic:
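Weinberg’s estimate can be turned into a rough model – a minimal sketch, assuming a simple linear penalty of 20% of total working time per additional project, with the remainder split evenly across projects (this approximates, rather than exactly reproduces, Weinberg’s published table):

```python
def time_per_project(n_projects, switch_cost=0.20):
    """Approximate fraction of working time each project receives when
    one person is shared across n_projects.

    Assumes ~20% of total time is lost to context switching for every
    project beyond the first (Weinberg's rough estimate), and the
    remaining productive time is split evenly across the projects.
    """
    if n_projects < 1:
        raise ValueError("need at least one project")
    productive = max(0.0, 1.0 - switch_cost * (n_projects - 1))
    return productive / n_projects

if __name__ == "__main__":
    for n in range(1, 6):
        share = time_per_project(n)
        lost = 1.0 - n * share
        print(f"{n} project(s): {share:.0%} per project, {lost:.0%} lost to switching")
```

One dedicated project gives 100% of time to that project; by three shared projects each gets only 20%, and beyond that almost everything is consumed by switching.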


Blocked Team Members

Solving complex problems requires close teamwork and swift collaboration. When a colleague switches to work with another team, any unfinished work they were involved in can block the progress of fellow team members. In the worst case, the original team may have to wait for the team member to return before a highly specialised task can be completed. Over-specialisation is another issue that drives organisations to share people across multiple teams, but I’ll save that for a future blog.


Shallow Knowledge

It takes a focused effort to gain and retain a sufficiently deep understanding of a complex business and technical domain. Attempting to achieve this across multiple simultaneous projects leads to a shallow veneer of knowledge and a poor basis for smart decisions and design.


Diluted Commitment

Being shared across multiple projects erodes commitment by providing places to hide, and we hear this sort of thing: “Sorry, I didn’t get my stuff done because I was working on the other project”.

Deployment Contention

In most organisations, fully live-like testing environments are rare and expensive resources that are vital to ensure the software is of sufficient quality to release to production. Like a crowd of people trying to get through a single small door, these environments become a bottleneck on a project’s journey to live deployment. The greater the number of in-flight projects, the worse this situation becomes, leading to a queue of projects and delaying the delivery of value to customers.

Meeting Overhead

Very little software is actually developed in meetings, so we try to minimise meeting overhead to just enough, just in time. However, even with an Agile approach like Scrum, some meeting time is necessary – as much as 15.5 hours per team member in a 2-week Sprint. The more teams a person is shared across, the less time they have to actually develop software – see the numbers below:

The numbers below are worked out assuming:

  • Scrum
  • Working time of 6 hours per day (minus breaks and non-project time)
  • 2 week Sprint
  • 60 hours available per person per Sprint

Meeting time per person per Sprint:

  Sprint Planning                 4 hours
  Sprint Review                   2 hours
  Sprint Retrospective            1 hour
  Backlog Refinement (10% max)    6 hours
  Daily Stand-up                  2.5 hours
  Total                           15.5 hours


Impact of meetings as numbers of projects increase
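The arithmetic behind this is straightforward – a minimal sketch, assuming the figures above (60 available hours per person per Sprint, 15.5 hours of Scrum meetings per team) and ignoring switching cost, which would make things worse still:

```python
SPRINT_HOURS = 60.0     # available hours per person per 2-week Sprint
MEETING_HOURS = 15.5    # Scrum meeting hours per team per Sprint (see table above)

def dev_hours(teams):
    """Hours left for actual development in a Sprint when a person
    attends the full set of Scrum meetings for every team they are
    shared across. Floors at zero once meetings consume everything."""
    return max(0.0, SPRINT_HOURS - MEETING_HOURS * teams)

if __name__ == "__main__":
    for teams in range(1, 5):
        print(f"{teams} team(s): {MEETING_HOURS * teams:.1f}h in meetings, "
              f"{dev_hours(teams):.1f}h left for development")
```

With one team, 44.5 hours remain for development; shared across three teams, that drops to 13.5 hours, and at four teams the meetings alone exceed the available time.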


Of course, all of these problems tend to be cumulative. For example, the reduced time for actual work caused by multiple projects is further compounded by the switching cost, leaving very little productive working time.

Organisations that accept shared team members as an inescapable reality must also accept that their teams are never going to perform as well as they would like.

So, the bottom line: reduce the number of simultaneous projects and dedicate team members to one team and one project at a time to increase the flow of value delivered to customers.