All posts by Colin Bird

Shaping Products for Agility

Despite widespread adoption of Agile frameworks, organisations are failing to be Agile. The causes are often self-inflicted. In this post we will focus on one common contributing factor – how products are structured.

Most organisations have adopted Agile and many of the groups involved in the value chain may be Agile teams. However, the outcome is often less than optimal with long and unpredictable lead times to turn ideas into value realised by the customer.

When we look at a value stream for an organisation’s product or service the picture revealed is usually a more complex version of this:

In addition to Agile teams, there are usually other groups contributing, for example, Procurement, Finance, HR, Infrastructure, Operations, PMO, etc. Much additional and often unnecessary complexity can be introduced in the processes and interactions with these groups – the subject of future posts…

Looking through a product lens, we see “vertical slices” of component work that collectively realises the product as the customer understands it. For now, we’ll focus on just this component element of the puzzle.

Due to size and complexity, the product is split into parts that are treated as separate products, let’s call them “Partial Products”. Each partial product is assigned to a Product Owner who is responsible for building and maintaining an associated backlog:

This approach contributes much additional and unnecessary complexity.

From a Product Owner’s perspective, the limited scope reduces their view of the customer to the internal teams and stakeholders who are going to consume and build on the output from their team. The loss of sight of, and empathy with, the end customer not only reduces the quality of the backlog content and prioritisation decisions, but also reduces motivation. Ensuring that all the related backlogs contain all the necessary elements to satisfy the end customer is a constant challenge. There are always gaps and overlaps in coverage.

Trying to apply and optimise agility within the constraints of this product structure will have limited success. So, what do we do?

We shape our products to the value stream and widen the scope of our products, product ownership and team accountability – an End-to-End Product:

This is not a new idea – this is the essence of Agile!

  • It’s always worth reviewing the New New Product Development Game (Hirotaka Takeuchi and Ikujiro Nonaka – 1986), which informed thinking for Scrum.
  • LeSS (Large-Scale Scrum) has the concept of Feature Teams based around customer centric Product Backlogs.

This is all very well, but what happens when we have a large complex product that exceeds the cognitive capability and time capacity of a single Product Owner?

Descaling Agile

The first question we ask is “What is driving the complexity and how can we reduce it?” Sometimes product complexity is higher than it needs to be due to incremental change over time and historic decisions. For example, our product may be a composite of different products, perhaps through acquisition or distributed over multiple departments and/or teams.

To align to value and reduce complexity we start with the customer and work backwards. Form the overall product shape from their needs. Then work out how it needs to be supported by Product Ownership.

Scaling Product Ownership

If we reduce to just the bare essential complexity but the product still exceeds the ability of a single Product Owner, then we scale to the “minimum viable number of Product Owners” by partitioning the product horizontally along the value stream:

The splits should result in realisable value to the end customer, albeit of reduced scope. Modelling product ownership this way enables backlog items to be articulated as “outcomes” rather than design specifications. This increases the empowerment of teams to innovate and is also much more motivational.

The dependency problem we see with the vertical partitioning approach is greatly reduced. Instead of “hard dependencies” between components, we now have lighter “relationships” and ordering considerations. These can be managed by the group of Product Owners collaborating as a team on the collective superset of the backlog. The overall Product Backlog is best supported as a single backlog with subset views for each of the Product Owners’ horizontal partitions of the value stream.
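As a rough sketch of a single Product Backlog with subset views (the item fields, partition names and titles here are invented purely for illustration), the idea might look like this:

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    title: str
    partition: str   # horizontal slice of the value stream, e.g. "onboarding"
    priority: int    # a single global ordering across the whole product

# One shared backlog for the whole End-to-End Product.
backlog = [
    BacklogItem("Customer signs up with email", "onboarding", 1),
    BacklogItem("Customer pays first invoice", "billing", 2),
    BacklogItem("Customer invites a colleague", "onboarding", 3),
]

def partition_view(items, partition):
    """A Product Owner's subset view: filtered, but keeping the global order."""
    return sorted(
        (i for i in items if i.partition == partition),
        key=lambda i: i.priority,
    )

# The "onboarding" Product Owner sees only their slice, in global priority order.
onboarding = partition_view(backlog, "onboarding")
```

The key point the sketch tries to capture is that there is one ordered backlog; each Product Owner works from a filtered view of it rather than from a separate artefact.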

Finally, match and optimise your team design to the ownership model. This last part may require revisiting the ownership model in the rare situations where the number of teams required exceeds what a single Product Owner can support.


  1. Understand and reduce unnecessary complexity in the value stream
  2. Product structure is a common cause of additional complexity
  3. Descale – reduce complexity by aligning the product shape around customer needs
  4. Scale only to the “Minimum viable number of Product Owners”
  5. Partition the product into horizontal subsets of the value stream
  6. Product Owners collaborate as a team of Product Owners
  7. Manage the backlog as a single Product Backlog with partition subset views
  8. Team design follows Product (avoiding negative consequences of Conway’s Law)

Agile Failed or Failed to be Agile?

Reading some posts on LinkedIn this morning got me thinking about this whole “Agile is dead” theme that seems to be in vogue.

The Agile Manifesto opens with “We are uncovering better ways…”. It was never intended to be a static set of concepts carved in stone! The idea that “it’s time to replace Agile” is like saying that “we tried to be better but failed, so let’s blame it on Agile and do something else instead…”.

Like many ideas, Agile means different things to different people so we are not even arguing about the same thing. To some, Agile is where you list all the requirements in a backlog and then plan and execute work in 2-week Sprints. To others, it is a more abstract concept that guides how to navigate complex challenges by continually learning and adapting towards better outcomes.

This situation shouldn’t surprise us. The Agile Manifesto was born out of the practical experiences of thought leaders experimenting with new ways to approach software development challenges back in 2001. The Manifesto is very specifically about software and hasn’t changed since it was originally penned.

At its heart, when you strip away “software” the manifesto contains ideas that prove to be generally applicable to groups of humans taking on complex challenges. Here’s a summary of what I see in the manifesto:

  • Focus on customer collaboration and value.
  • Optimise feedback loops and learning to incorporate change as a natural part of solving complex problems.
  • Highly autonomous teams of motivated, empowered, collaborative and diverse cross-functional people are the basis for delivering value.
  • It’s not about the process, it’s about the people – look after them!
  • Regularly inspect and adapt environment, processes and product to sustainably increase effectiveness and reduce product feature waste.

Observing organisations trying to survive and thrive in today’s challenging VUCA world, I would argue that all five bullets above are highly relevant. If we look a little closer, we will find that many of those organisations will claim to have adopted Agile but are in fact a great distance from those bullets. So, is it Agile that’s failed? More likely, we are failing to uncover better ways…

From the lovely and much missed Jean Tabaka circa 2009 – “We should focus on organisational change, not lean or agile”.

Thanks to Karl Scotland for drawing my attention to this statement in his post.

The key challenge isn’t with the ideas of Agile, it’s how we effect organisational change to discover improved ways of working. The tripwire is the change or transformation process – this is what typically fails! Organisational models are designed for stability and predictability, building in change resistance. Leaders have too much to lose, and change threatens safety and job roles.

Organisational change is a complex endeavour. Sadly, it is often approached in a plan and top-down driven way with a start and end point. Ironically, what we need is to be Agile! An iterative and incremental change process that enables safe-to-fail experiments to emerge better practices. And, of course, the courage to make real changes to the environment, structures, roles, policies, processes, budgeting, reward and recognition systems, etc…

So, in summary: if we buy into the understanding of Agile as a direction of travel to uncover better ways of working, then the bit we need to fix is how we change our ways of working.

Scaling Agility? Time for something ELSE!

Something ELSE for Scaling Agility

It’s becoming increasingly clear that a Copy & Paste of the “Spotify model” or rolling out SAFe has a vanishingly small chance of realising the untapped agility of your organisation.

Scaling agility requires fundamental changes to an organisation, as it is so much more than bolting in a new process and set of “best” practices. Badly applied scaling frameworks often make the situation worse, rarely resulting in sustained and positive change, with extra roles and processes dragging on progress while old habits reassert themselves.

It’s not that the contents of scaling frameworks are necessarily bad, but we need to recognise that practices are context-specific, and that meaningful and sustaining change requires an emergent journey rather than a transactional switch from old to new ways of working.

It’s time for another approach, it’s time for something ELSE!

ELSE (Emergent Large-Scale Evolution) is a continuous improvement approach that fosters emergent practices, guided by actionable principles, to enhance organisational agility at scale.

ELSE is a free resource that aims to increase the probability and level of success for organisational agility and large-scale products. Although it offers an alternative approach to those offered by scaling frameworks, ELSE is not incompatible with many of the patterns and ideas contained within them.

It acts as a guide to help analyse and understand the current context and set your organisation on a journey of continuous improvement and fostering of emergent practices.

  • Engaging and empowering people in a generative change process that emerges context-specific and appropriate practices.
  • Supporting your organisation with the challenge of enacting and sustaining change.
  • Providing a set of Scaling Principles at an “actionable” level of detail to support a critical inspection process to understand improvement opportunities.
  • Acting as a guide for the direction of improvement.
Emergent Large-Scale Evolution

Want to know more?

The ELSE MVP was made publicly available at OOP 2024 in Munich and will be in a state of continuous evolution helped by your feedback!
The Perspective sections describe the underlying concepts and approaches in some detail. So, for example, if you want to understand:

  • the change approach at the heart of ELSE, have a read through the Change Perspective
  • how leaders catalyse and support the evolution of an environment that engages, empowers and unleashes the human potential of an organisation then absorb the Leadership Perspective
  • how to shape product and product ownership at scale, check out the Product Perspective
  • how to structure and support highly effective autonomous Agile teams then review the Teams’ Perspective

The Principles provide a direction of travel and a way to view and understand your current context. The principles are more abstract than practices and should therefore be more universally applicable. However, they are pitched at what we term an “Actionable” level of detail, with the intention that they should be concrete enough to be practical. For a deeper discussion of why we focused on Actionable Principles as opposed to the practices that we find in frameworks, go here.

The current form of ELSE has been created through a collaboration of five diverse Agile coaches and trainers with plenty of battle scars: Pierluigi Pugliese, Colin Bird, Matt Roadnight, Simon Roberts and Jan B. Olsen.

Don’t be a “Best Practice Sheep”!

The idea of Best Practice is universal and seductive! I’m often asked for best practices for Scrum or how to scale Agile. We instinctively want to know the answer and have the recipe for success.
The message of this blog is, please don’t be a “Best Practice Sheep”!

So what is best practice? Here’s one definition:

Best practice comes in all shapes and sizes. We steal checklists, cheat sheets and workshop templates on LinkedIn. We copy the Spotify “Model” or implement a big framework like SAFe, and these are effectively large collections of practices.

And it is an attractive and compelling concept:

So what’s wrong with taking a shortcut with some proven practices?

Well, here are some points to ponder:

  • The success of a given practice is highly dependent on the context for which it is suitable. Practices are often Copy & Pasted without regard to the context and principles that informed their original derivation.
  • The larger and more comprehensive a practice or set of practices, the more context-specific it becomes. With increasing size, the probability of an unadapted practice being a suitable fit for your specific context decreases.
  • The assumption that a given practice is “Best” prevents us from improving the practice because, clearly, it’s already as good as it can get, and if you mess with it, you might break it!
  • Adopting the same practices as everyone else means that if you do it really well, you might just be as good as them – never better, though!
  • The journey is at least as valuable as the arrival. The engagement, empowerment and ownership that come from a group analysing and deriving their own answers to challenges is a large part of the magic of the practice that emerges.
  • Adopting best practices often seems to cut out critical thinking in exchange for passively following a path set out by others who have done the thinking for you.
  • Practices are often a static snapshot in time. For example, the “Spotify model” is a generalised view of patterns used at Spotify circa 2012, as presented by Henrik Kniberg (who made it clear that the content of the talk would be out-of-date in 6 months).
  • Who says that a given practice is “Best”? Is there an official, unbiased body? Is it just very commonly shared, so we assume it must be best?

So what should we do instead?

  1. Start with your context and understand the problem that needs to be solved.
  2. Apply principles to guide the selection of potentially appropriate patterns and practices.
  3. Use small experiments to evolve and emerge your own context-specific practices.
  4. Repeat! Continuous Improvement should just be the way we always work!

This diagram shows a spectrum from more abstract principles through to concrete rules and practices, along with the behaviours likely to be observed at each point.


  • Don’t dumbly follow best practices!
  • The hard work of emerging our own practice is worth it!
  • Best Practices can be used as input but explore the underlying principles and adapt to fit your context.
  • Emerge your own practice, using small safe-to-fail experiments to validate/invalidate your assumptions and ideas.
  • Evolve through constant inspection and adaptation – you are never done, as your practice is never “Best”!

For a more detailed look at the challenges of Best Practices and what to do with them, here’s a talk I gave on the topic.

Bounded Autonomy: Supporting Agile Team Autonomy With Boundaries

The Need for Autonomy

“In the long history of humankind (and animal kind, too) those who learned to collaborate and improvise most effectively have prevailed.” – Charles Darwin

The evolutionary pressure in the world has never been more volatile and uncertain. Surviving and thriving requires organisations to respond rapidly, adapt and navigate complex challenges. An Agile organisation nurtures and supports Agile teams that can rapidly and effectively engage with complex challenges.

Autonomous Agile Team

There are a number of factors contributing to the effectiveness of an Agile team. One of the core factors is the need for a high degree of Autonomy so that they can respond rapidly, operating with minimum dependencies on external decision-makers and other teams and services.

In addition to a “Process efficiency” gain, team autonomy is also a major motivational factor, increasing team member engagement.

Here is an excerpt from Self-Determination Theory:

“Conditions supporting the individual’s experience of autonomy, competence, and relatedness are argued to foster the most volitional and high quality forms of motivation and engagement for activities, including enhanced performance, persistence, and creativity.”

In summary, Agile teams are highly effective when they are empowered and can function with significant autonomy and clarity of purpose.

Bounding Autonomy Without Breaking It

A question that many leaders ask: “How do we direct Agile teams without telling them what to do and destroying their autonomy and empowerment?”

The answer is to ensure that there are explicitly agreed boundaries around the teams that make clear what they can, and what they can not, decide and act on independently. Team autonomy should be “Bounded”.

Careful design and co-creation of team boundaries enable:

  • Explicitly agreed boundaries – teams won’t have to guess what decision authority they have.
  • Focus – the team have clarity of purpose and priorities guided by their Product Owner.
  • Constraints – the team understand the technology, design and compliance standards within which they need to work.
  • Emergence – boundaries should evolve when teams find that they limit value creation.

Agile Team Control Surfaces

The diagram below illustrates an array of “Control Surfaces” that serve as mechanisms to bound interactions with the team and actions by the team:

Team Focus

To influence what the team works on, stakeholders collaborate with the Product Owner, offering their perspectives on backlog candidates and priorities. Stakeholder input will assist the Product Owner in forming the Product Goal and influence the Sprint Goals selected by the team, providing the alignment “Focus” around the mission. In the words of Stephen Bungay, “The more alignment you have around direction, the more autonomy you can get around actions” (The Art of Action). No one outside the team should override the direction guided by their Product Owner or directly task team members.

Team Constraints

The team need to be free to decide how to meet the outcome described by Product Backlog items and their Acceptance Criteria. However, constraints can be applied via the Definition of Done (DoD) and other agreements that cover standards and compliance aspects. These agreements should be collaboratively formed with the team and other relevant stakeholders and should be viewed as living, evolving artefacts that the team can challenge.

Boundary Permeability

The “Semi-permeable” boundary around the team indicates the protection the team should have from disruption and interference in their mission to achieve the agreed goals. It should, however, not exclude the team from interacting directly with sources of information external to the team.

Role of Leadership

Leadership are considered as “Stakeholders” when it comes to influencing what the team focus on. They also have an essential role in supporting the evolution of the environment around the team to ensure that the team can maximise their value creation for customers and stakeholders. Where the team are hindered by issues such as unnecessary or overly heavy bureaucracy, leadership should work to understand and evolve the organisational systems and structures.


The boundary delineates who is in the team and assumes that the membership is largely stable and committed to accomplishing the shared team mission – what Richard Hackman refers to as a “Real Team” – Leading Teams by J. Richard Hackman.

Teams must be cross-functional – they have all the skills required to autonomously turn backlog items from ideas to outcomes. In other words, the team is not dependent on external capability to deliver items from their backlog.

A further precondition is that backlog items themselves are autonomously valuable and articulated in business domain outcome form and not as technical or component level work.

Protecting Autonomy at Scale

Scale Challenges to Autonomy

In scaled situations, where there can be multiple teams contributing to a large and complex outcome, there is often a loss of team autonomy. There are a number of potential causes, including:

  • Top-down imposition of bureaucracy and additional process controls.
  • Additional team coordination functions external to the teams.
  • Lack of clarity about what is within a given team’s decision-making authority.
  • Poor Value Stream/Product and Team design creating interdependent and overlapping work.
  • Over-specialisation of skills leading to teams that are not fully cross-functional.
  • Design decisions imposed by functions external to the teams reducing team empowerment and increasing dependency on external agents. E.g. Architectural, User Experience, Security, Infrastructure, etc…

Team Structure

Ideally, the design of the team of teams structure should be optimised to provide an environment that will support greater team autonomy. Look to incrementally improve the following factors to foster the conditions for greater potential team autonomy:

  1. Design the Product and team structures so that they align closely with the business and customer Value Stream.
  2. Optimise the team structures and skills mixture to minimise interdependencies with other teams whilst staying true to number 1.
  3. Avoid external subject matter experts injecting design decisions into teams. This requires that the team membership includes the relevant competencies and has the authority to emerge design from within the team.

Applying Boundaries

Building on the evolving improvements to the structural team model, work with the teams and relevant stakeholders to create boundaries to support autonomous team behaviour at scale:

  1. Create explicitly agreed boundaries around each team so that they understand what they can decide and execute autonomously, versus what they need to escalate and discuss externally. Boundaries should be set and evolved collaboratively with the teams rather than imposed on them.
  2. Use shared “Enabling Constraints” to encourage teams to collaborate enough that their autonomous design and output integrate to support the outcomes of the Value Stream. A combined integrated Sprint Review on a shared environment at the end of each Sprint is a great example.
  3. Ensure that the teams work within a shared context of the end-to-end Value Stream so that they understand the bigger picture of the overall mission.
  4. Continuously evolve all of the above!

Shared Boundaries

Consider a number of teams contributing to the same Value Stream. Each team will need a clear set of boundaries within which they can be autonomous. These team boundaries may also need to share some common elements to support consistency for decisions that have a Value Stream-wide impact. For example, there may need to be common interface or user experience standards so that the combined work has cohesion and user interaction consistency.

Within a Value Stream

The diagram below illustrates this relationship where the effective team boundaries include common Value Stream level boundary elements. Each team should also be able to exert influence on the Value Stream boundaries, for example, identifying new elements or those that need to be updated and removing excess restrictions and bureaucracy.

All Value Stream teams sharing a common set of core Definition of Done elements is a good example of a shared boundary that will support inter-team collaboration and consistent quality of the integrated whole.

Across an Organisation

The DoD is also an example of where an explicit boundary may be, and usually is, influenced by organisation-wide quality standards.

The model can be thought of as a “Russian Doll”, with the effective boundary from a team’s perspective being an amalgam of the organisational + Value Stream + team-specific boundaries. As per the diagram below:
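As a minimal sketch of the “Russian Doll” idea (the Definition of Done elements below are invented examples, not recommendations), the effective boundary from a team’s perspective is simply the union of the organisational, Value Stream and team-specific elements:

```python
# Invented example Definition of Done elements at each boundary level.
org_dod = {"security review passed", "accessibility standards met"}
value_stream_dod = {"integrated on the shared environment", "consistent UX patterns applied"}
team_dod = {"unit tests green", "code peer reviewed"}

def effective_boundary(*levels):
    """The boundary a team actually works within: the amalgam of all levels."""
    merged = set()
    for level in levels:
        merged |= level  # union: outer levels add to, never replace, inner ones
    return merged

team_view = effective_boundary(org_dod, value_stream_dod, team_dod)
```

The design point the sketch makes is that outer layers only ever add constraints; a team’s own DoD elements sit alongside, not instead of, the Value Stream and organisational ones.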


  • How teams are designed is a key determining factor in enabling their autonomy.
  • Team design should be based on a Product structure that is aligned with Value Streams.
  • Team design should be optimised to reduce interdependencies between teams, external experts and external services.
  • The right boundary constraints, explicitly articulated, and collaboratively set and evolved, will establish the conditions for autonomous and empowered teams.
  • Scaled teams should collaborate and agree on the boundary constraints that will be necessary to support clear autonomy at a team level and understand why some constraints need to be set and evolved at a Value Stream level.
  • Value stream and team boundaries have to consider what constraints need to be included from the wider organisation. 
  • Although boundary constraints define the limit of team and Value Stream autonomy, they are open to challenge whenever they appear to impede value flow.
  • Leaders play a crucial role in supporting the continuous evolution of organisational environments to optimise and sustain value creation.

COVID as Forcing Function

What is a Forcing Function?

Forcing Function: In the Agile space, a forcing function is a constraint that causes a change in behaviour as a team adapts to accommodate the constraint.

A Scrum Sprint is a good example of a forcing function – the team has to figure out how to complete work in a tight timebox. If they work in a sequential waterfall approach: analysis – design – development – testing, then chances are they will fall out the back of the Sprint with only partially completed work. The Sprint boundary forces them to reconsider how they shape work and collaborate to get it done, thereby driving behaviour change.

Face-to-Face Communication

In the Agile world we have always favoured face-to-face over other forms of communication and this is enshrined in the 6th Agile Principle:

The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.

When we design teams, we aim to co-locate team members so that we maximise their ability to directly communicate and collaborate. If you’d asked me a few months ago, I would have told you that a co-located team will always outperform a distributed team.


And then along comes the COVID lockdown! Organisations were forced to send their people home and fully distributed teams became non-optional. A new and far-reaching forcing function had arrived!

COVID as a Forcing Function

Given the enforced distribution of their workforce, organisations were forced to rapidly confront a number of infrastructural and people challenges. Our experience over the past months is that many have adapted well and some teams (but not all) have begun to perform better in distributed form than when they were co-located.

At its heart, agility is about adapting. Those teams and organisations faring better have adapted, and continue to adapt, their behaviours, policies and infrastructure to surmount the challenges. I think many of the adaptations may well be retained and incorporated into the new ‘normal’, whatever that might be…

Digital Infrastructure

Some teams and organisations bumped into limitations in their infrastructure and policies that dramatically reduced the ability of their people to productively contribute. Examples include:

  • Draconian security policies that enforce tight security but prevent workers from actually working.
  • Mandatory use of virtual desktops and VPNs where the performance is so poor that video conferences and use of online collaboration tools is impractical.
  • Blanket ban on use of cloud based collaboration tools that sit outside of the corporate IT infrastructure.
  • Overly rigid rules usually result in people finding ways to circumvent them (e.g. using their own PC), which is potentially less secure.

Rather than blanket bans and rigid rules, security has to be contextually flexible and an enabler for a distributed workforce to work effectively whilst reducing risks to an acceptable level for the sensitivity of specific types of information.

Not all broadband connections are equal! Even those living in more populous areas have struggled along as second-class citizens, repeatedly having to reconnect or unable to support video and audio.

I’m hoping that in the UK, Boris’s “Build Build Build” slogan is going to have our national digital infrastructure at its heart! Meanwhile, organisations might consider investing in business broadband connections for workers who regularly facilitate or lead large online workshops and are currently suffering poor-quality consumer Internet connectivity. Consider the real cost of a disrupted meeting versus the investment cost of better infrastructure.


The home office needs to be equipped to support extended digital collaboration. Sitting on the sofa with a tablet isn’t going to cut it! The home worker must have reliable WiFi, a suitable laptop, screen, keyboard, video camera, microphone, seating/standing desk, etc.

IT strategy and services need to redraw the boundary and recognise their workers’ home infrastructure as an extension of the corporate IT landscape.



Humans are by and large social animals. Our social interactions in a coffee break or grabbing a sandwich at lunchtime are an important part of the relationship building that collaboration and trust are built upon. They also play a key role in psychological wellbeing.

Extra attention needs to be put into non-work social interaction. Regular virtual coffee breaks with random colleagues are one approach – we’ve used an add-in to Slack called Donut for this. A bit of banter at the beginning of a meeting helps get people relaxed and talking, as well as providing a bit of an interaction fix. Having video on also humanises interactions as well as adding richness and effectiveness to verbal communication – seeing the Programme Manager’s Star Wars models on a shelf behind their head really helps break down barriers!
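The random-pairing idea that tools like Donut automate can be sketched in a few lines (the names are placeholders, and this is just the general idea, not how Donut itself works):

```python
import random

def coffee_pairs(people, seed=None):
    """Shuffle colleagues and pair them off for a virtual coffee break."""
    rng = random.Random(seed)   # seedable for reproducibility in this sketch
    shuffled = list(people)
    rng.shuffle(shuffled)
    # Take people two at a time from the shuffled list.
    pairs = [shuffled[i:i + 2] for i in range(0, len(shuffled) - 1, 2)]
    if len(shuffled) % 2:       # odd headcount: the leftover joins the last pair
        pairs[-1].append(shuffled[-1])
    return pairs

groups = coffee_pairs(["Ana", "Ben", "Caro", "Dev", "Eli"], seed=42)
```

With an odd headcount the leftover person joins the final pair as a three, so nobody misses their coffee.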

Motivation, focus and sense of purpose can be harder for some, removed from the buzz and life of the office environment.

Additional support may be required, adding structure to people’s days with regular touchpoints with colleagues and leaders, coaching to break down larger missions into short term tangible goals and ‘micro-plans’.


Teams that had already formed before COVID have generally been able to adapt to the distributed world, taking their existing relationships and team identity into the online world. Where a new team has been formed from scratch there have been far greater challenges to building the team identity, cohesion and trust.

Additional thought and time should be put into online team building activities to accelerate the development of the interpersonal relationships. Systematic pairing or even working in threes will also fast-track the sense of team as well as increase knowledge sharing, common purpose, work quality and collaboration.


Zoomed out! Whilst Microsoft Teams and Zoom are essential collaboration tools for the distributed workforce, they can also become a prison. Back-to-back conference calls are draining, and when there’s no time for a toilet break or lunch they create an unsustainable pressure as well as reducing the time for focused work. Another side effect is that meeting start times are delayed waiting for everyone to escape from their last call, reducing the effectiveness of the meeting and leading to it running over time.

Schedule meetings with gaps between them and reduce meeting durations. Rethink how meetings are run and look for fresh alternatives to watching a slide deck presentation – formats that really get participants engaged. We use virtual whiteboard tools as platforms to create interactive meetings with far greater collaboration and enjoyment.

The whiteboard artefacts transcend the scheduled meeting period so collaboration can take place before and after the meeting, as well as in the session.

Shameless plug… whilst we’re talking about tools, have a look at our tool for online Lean Coffee format meetings.


COVID has forced us to rethink the way we work. The result is that we have proved remote working and distributed teams are viable and, with some adaptations and effort, can be made to work well. I suspect that a co-located team will still generally outperform a distributed team (especially if we include team formation). However, that’s just one dimension and we should also factor in the advantages:

  • Potential for a better life-work balance
  • Reduced corporate office costs
  • Reduced travel time
  • Positive environmental impact
  • Removal of geographical barriers to team membership

There are a number of tools and techniques that I’ve used through this period that I will continue to use even when/if I get to work with co-located groups again.

Scaling Agile Teams: Feature versus Component Teams

Agile experts will tell you the first rule about scaling Agile is … Don’t!

The ideal Agile team model is a single co-located, cross-functional team working closely with the customer. As soon as you involve multiple teams to build the same product there will be challenges and a reduction in efficiency (the amount of value created for a given investment). However, if timelines and product scale are beyond a single team then we consider scaling to a “Team of Teams”.

We attempt to structure the teams and work for maximum effectiveness by staying true to Agile principles and adopting a set of scaling patterns.

There are many potential patterns, however there is usually a fundamental choice between the Component or Feature team pattern as a basis to structure the teams.

This article will consider the pros and cons of these two approaches.

Let’s start with some brief definitions of the two patterns:


Component teams are structured to match technical layers, components or services. Examples include: iOS team, Payment API, Data Services, etc. Customer- and/or business-valuable backlog items are split across the teams, each contributing functionality in the components or services for which they are responsible.



Feature teams are structured to match business value so that they take end-to-end stripes of customer valuable functionality and deliver them within the boundary of a single team. Unlike the Component teams, Feature teams work across all components related to providing the business functionality.


In our coaching work, we find that most organisations tend to favour the Component pattern as the primary model for team design, for reasons that include the following:

  • Specialised roles and expertise are easier to fit into specialist component focused teams and it would seem logical that people with the deepest knowledge in a given component will do the best job!
  • Related to the first point – departmental boundaries are usually easier to accommodate with component teams.
  • Clear boundaries of responsibilities for design, build, test and bug fixing.
  • Clear and well understood inter-team interfaces/hand-offs that match architectural platforms and layers.
  • Teams are somewhat decoupled so that they can work independently without breaking and interfering with each other’s work.

So, on face value the Component pattern looks like a smart choice! Before rushing to a conclusion however, I’d recommend considering some of the challenges generally experienced with the Component approach.

Challenges of the Component Pattern

Long Lead Times
Record the time from when work on a business value feature starts through to the time it is potentially shippable. The collective delays and challenges of the points below all accumulate to produce average lead times that are often multiple Sprints long.
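As a rough sketch of what “recording the time” might look like, here is some illustrative Python – the features and dates are invented, not taken from a real tracker:

```python
from datetime import date

# Measuring feature lead time: first component work starts -> potentially
# shippable. The features and dates below are invented for illustration.
features = [
    (date(2021, 3, 1),  date(2021, 4, 12)),
    (date(2021, 3, 8),  date(2021, 5, 3)),
    (date(2021, 3, 15), date(2021, 4, 26)),
]

# Lead time per feature in calendar days, start to potentially shippable.
lead_times = [(done - start).days for start, done in features]
average_lead_time = sum(lead_times) / len(lead_times)

print(lead_times)                   # [42, 56, 42]
print(round(average_lead_time, 1))  # 46.7 - more than three 2-week Sprints
```

Even this toy data shows how quickly the average drifts to multiple Sprints once hand-offs and integration delays accumulate.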


Additional Processes and Roles
The Component pattern looks neat on paper but in practice it requires many additional processes. These include:

  • More upfront work on architectural design so that business valuable backlog items can be chopped up into their technical components and handed out to the teams. Technical Product Owners are often added to the mix to run this process per team.
  • Once a team finishes a component it will need to be integrated and tested when the dependent components are complete. Integration and test teams are common additions.

Early Commitment
In order to design the component team structure (i.e. identify the components required) and to prepare the technical backlog items, architectural assumptions need to be committed to early. Apart from reducing the teams’ empowerment, it can also lead to later rework when assumptions turn out to be incorrect.

Dependency Challenges
Inter-component dependencies can block teams’ work so management and prioritisation of the technical backlogs to optimise the overall flow of high value work becomes a real challenge and overhead.

Invisible Technical Debt
Due to the focus of teams’ responsibility on their components, the Definition of Done is scoped to individual team output. In other words, we have a “Team Done” and then a “Done Done” when all components have been integrated and fully tested sometime later. The result of this hobbled DoD is that an invisible amount of debt is being carried, adding significant risk to the project and making forecasting metrics almost worthless (Burnup and Burndown charts are predicated on Done meaning potentially shippable).

Delayed Feedback Loops
The decoupled “advantage” of component teams can lead to delayed integration, feedback and learning. Later discovery of issues offers the choice of often expensive rework or acceptance of lower quality.

  • Problematic integration with extended effort and time to rectify, often with less than optimal joins between components.
  • Late discovery of load & performance challenges leading to delayed releases and/or poor production performance.
  • Expensive rework and potential for higher cost of ownership of final solution once in production.

Sub-optimised Design
One of the disappointments of the component pattern is that it rarely delivers the quality of architecture expected from a model where the best qualified people are working on their areas of expertise. There are a number of contributing factors that lead to this outcome:

  • Loss of business context of the overall objective as teams are too removed from the business goals, working against decomposed technical backlog items.
  • Technically sub-optimised decisions within an architectural layer that seem to make sense except when viewed from the perspective of consuming functionality or the overall system.
  • Conway’s Law: the solution architecture reflects the team structure rather than the business services it is meant to deliver.


Uneven Workloads
One team can become overloaded whilst another may be scratching around for work. A secondary problem of this scenario is that teams are not necessarily always working on the most valuable items; rather, they work on items that aren’t blocked by a dependency and are within the scope of their service or layer.

Finger Pointing
When bugs emerge, they often spread across team responsibility boundaries, leading to finger pointing as to which team is at fault and who should resolve the issue.


So, as you may have gathered from the above points, I’m not the biggest fan of the Component pattern, and this is from many years’ experience, starting with my first attempts to scale Scrum teams back in 2005. I thought it looked like a logical approach at the time!

So, does that make the pattern bad? No, not necessarily! All patterns have to be considered in a given context and then if they look promising, try them and then regularly inspect and adapt. There are also many sub-patterns that we can apply to mitigate some of the issues discussed above.

I generally find the Feature team model more appropriate in most situations. This alternative pattern helps to resolve or reduce many of the challenges experienced with the Component approach, particularly:

  • Reduced lead times for producing working business aligned features, with faster feedback loops and reduction in process and role requirement.
  • Reduced dependencies and management overhead.
  • Increased team empowerment and architectural flexibility with later commitment.
  • Reduced excuses for a build-up of technical debt, with honest metrics showing realistic progress, or lack of it.
  • Easier to optimise teams’ workloads and focus on the highest value backlog items.
  • Issues and bugs have a clear owner: the team that implemented the feature.

However, the Feature pattern is no magic bullet and has its own set of challenges, and as for the Component approach, sub-patterns may need to be applied.


As this blog is already way longer than planned, I’ll explore some of these challenges and potential patterns in a later blog…

When considering patterns, judge them against Agile and Lean principles and see how they stack up. After all, patterns should be helping us instantiate these principles.

Some thoughts and aims when considering suitability of team patterns to your context:

  • Deliver working software frequently – with “working” validated by a DoD that is as close to potentially shippable as practical.
  • Working software is the primary measure of progress – track your Story Cycle-time, from the first work to potentially shippable.
  • Maximise team autonomy and align to business value through shared purpose.
  • Optimise feedback loops to maximise learning, validation of assumptions and reduction of risk. Not just what, but also how long!
  • JIT – retain plan and design flexibility as long as practical to maximise the amount of information before decision commitment.
  • Maximise collaboration – reduce impediments to information flow.
  • Just enough process and roles.
  • Use process tools wisely, if they get in the way of collaboration or create too much bureaucracy then change the way they are used.




Has the Burndown Chart Burnt-out?

The venerable Burndown chart has been a central plank in tracking progress and predicting likely release dates in Scrum since I can remember. Over the last few years, the Burnup chart has gained popularity – let’s explore why.


The strength of the burndown approach is in its simplicity and clarity, providing a view of net progress which helps predict when a product might be ready for release.

Propagating the net velocity helps to show when the product might become releasable – the dotted blue line on the diagram.

Each data point of the burndown tells us how much “Work Remaining” there is at that point in time, i.e. the current amount of work left to release the product. The delta, the “Burndown”, between the points is the total of:

  • Backlog completed satisfying the Definition of Done (burnt)
  • New backlog added to the Release scope by the Product Owner
  • Backlog removed from scope by the Product Owner
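The three contributions above combine into the single net figure that the chart plots. A minimal illustration (all numbers invented):

```python
# The "Work Remaining" delta between two burndown data points is the net of
# the three contributions listed above. All numbers are illustrative.
burnt = 21    # backlog completed, satisfying the Definition of Done
added = 13    # new scope added to the release by the Product Owner
removed = 5   # scope removed from the release by the Product Owner

previous_remaining = 120
current_remaining = previous_remaining - burnt + added - removed  # 107

# The chart only shows this net figure, so the 13 points of new scope
# are invisible to anyone reading it.
net_burndown = previous_remaining - current_remaining
print(current_remaining, net_burndown)  # 107 13
```

The team burnt 21 points, but the chart shows only 13 of net progress – which is exactly the weakness discussed next.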

Burndown Drawbacks

The simplicity and elegance of the burndown also turn out to be its weakness.

Agile approaches embrace change as a positive aspect to increase the value delivered to the customer and business. So, it’s no surprise to find that release scope often changes significantly over time (usually in an upwards direction!). In a burndown chart, it isn’t clear what contribution scope change is making to net progress; often the impression is simply that the team aren’t working hard enough.

There have been a number of approaches used to enhance the burndown chart to provide deeper insight but to my mind these attempts lose the chart’s strengths – simplicity and readability (they also look ugly!).

Although a team’s throughput can and does vary, working overtime, stronger coffee and more pizzas only have a marginal impact and are usually not sustainable on an ongoing basis. It turns out that the biggest lever we have to impact a release date is the scope. This is where the burnup chart comes in.


The Burnup Chart

The burnup chart tracks the throughput of the team and the scope separately so that the growth in the backlog is made very visible to all stakeholders.

The chart illustrated to the right shows an all too common situation where the backlog grows at a similar rate to the team’s throughput.

We can add a forecast for likely team throughput based on their historical performance.

Forecast approaches range from simple standard deviation through to sophisticated Monte Carlo simulation, modelling the team’s throughput data (Probabilistic Forecasting).
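A minimal Monte Carlo sketch of this kind of throughput forecast might look like the following – the throughput history and scope are illustrative, and real probabilistic forecasting tools use richer models:

```python
import random

# Monte Carlo forecast: how many Sprints to burn down the remaining backlog,
# sampling from the team's historical per-Sprint throughput.
# All figures are illustrative, not from a real team.
historical_throughput = [18, 22, 15, 25, 20, 17]  # story points per Sprint
remaining_scope = 100                             # story points left in the release

def sprints_to_finish(rng):
    done, sprints = 0, 0
    while done < remaining_scope:
        done += rng.choice(historical_throughput)  # sample a plausible Sprint
        sprints += 1
    return sprints

rng = random.Random(42)  # seeded so the sketch is repeatable
runs = sorted(sprints_to_finish(rng) for _ in range(10_000))

# Percentiles turn the simulation into a forecast range rather than a
# single date.
p15, p85 = runs[1_500], runs[8_500]
print(f"Likely finish: between {p15} and {p85} Sprints")
```

The 15th and 85th percentiles here are an arbitrary choice for the sketch; pick the confidence levels that suit your stakeholders.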

Now we add an indication of how the scope may grow over time.

This chart shows a scope forecast range from zero additional scope through to an increase at the current average rate.


The intersection of the forecast ranges provides something we call the “Landing Zone”. If we drop vertical lines down to the x axis from the intersection points, it shows the release date range from the earliest likely through to the latest expected dates.

Working with a delivery date range is much healthier and more realistic than forecasting to precise dates.
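Under a simple linear model, the landing-zone boundaries fall out of a little arithmetic (all figures illustrative):

```python
# A linear sketch of the "Landing Zone": earliest and latest forecasts from
# the intersection of the throughput and scope-growth lines.
# All figures are illustrative.
done = 60          # story points completed so far
scope = 140        # current total release scope
velocity = 20      # average points completed per Sprint
scope_growth = 6   # average points added to the scope per Sprint

# Earliest: no further scope growth, so the gap closes at full velocity.
earliest = (scope - done) / velocity                 # 4.0 Sprints
# Latest: scope keeps growing at the current average rate, so the gap
# only closes at (velocity - scope_growth) points per Sprint.
latest = (scope - done) / (velocity - scope_growth)  # ~5.7 Sprints

print(f"Landing Zone: {earliest:.1f} to {latest:.1f} Sprints from now")
```

Note that if scope growth matches or exceeds velocity, the lines never intersect – the chart makes that uncomfortable conversation unavoidable, which is rather the point.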




We have found that when stakeholders see the impact of scope on delivery dates it removes some of the emotion from the debate and instead creates a healthy discussion and focus on those backlog items that are critical to release versus those that are not.

While the Burndown chart is an elegant and simple view of release progress, the Burnup drives healthier conversations that help shape the Minimum Viable Product. If the Burnup is used all the way through a release, it will help prevent the pressure on the team to work at an unsustainable level and/or reduce quality levels to hit an unrealistic scope and deadline combination.

Agile process tooling is catching up, with growing support for burnup chart functionality. If you use JIRA, TFS or VSTS then take a look at SenseAdapt, which supports the burnup and a multitude of other handy charts.

Enabling an ROI Field with TFS Rippler

Why have an ROI field in the Product Backlog

The goal of a Product Owner in a Scrum team is to derive the maximum benefit from the team’s time – in other words, to maximise the Return On Investment (ROI), where the “Investment” is the cost of running the team and the “Return” is the value of the Product Backlog Items (PBIs) that the team turn into working software.

To help the Product Owner achieve this aim, they need to understand the relative business value per unit effort for each of the backlog items, so that they can guide the team to work on the items that will realise the most value for the lowest team effort. This is why it is common practice for Product Backlogs to have an ROI field, calculated by taking the estimate of business value and dividing it by the estimate of the required team effort to implement the PBI. We then use this field as a guide to order the backlog so that the higher value items per unit effort are towards the top – allowing the team to deliver greater value in their Sprints.
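The calculation and ordering described above can be sketched in a few lines – the backlog items and estimates here are invented for illustration:

```python
# Ordering a Product Backlog by ROI = Business Value / Effort.
# The items and estimates are invented for illustration.
backlog = [
    {"title": "Export to PDF", "value": 300, "effort": 5},
    {"title": "Single sign-on", "value": 800, "effort": 20},
    {"title": "Dark mode", "value": 100, "effort": 2},
]

# ROI: business value delivered per unit of team effort.
for pbi in backlog:
    pbi["roi"] = pbi["value"] / pbi["effort"]

# Highest value per unit of effort goes towards the top of the backlog.
backlog.sort(key=lambda pbi: pbi["roi"], reverse=True)
print([(pbi["title"], pbi["roi"]) for pbi in backlog])
# [('Export to PDF', 60.0), ('Dark mode', 50.0), ('Single sign-on', 40.0)]
```

Note how the highest-value item (Single sign-on) ends up last: its effort cost makes it the worst return per point, which is exactly the insight the ROI field surfaces.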

[Image: PBI showing the Business Value and Effort fields]


Enabling an ROI field in TFS

By default, a PBI in the standard Microsoft Scrum process template does not have an ROI field so the first thing we need to do is add the field to our Team Project.

Here’s a view of a PBI in the standard Scrum process template, showing just the Effort and Business Value fields highlighted in red.

[Image: PBI without the ROI field]

Add an ROI field to your PBI workitem definition, set it as type Double, and configure it as Read Only in the Team Explorer form, as it will be calculated by dividing the Business Value by the Effort field. Something similar to this should do the trick in the Workitem Type Definition XML:

Field Definition

<FIELD name="ROI" refname="RippleRock.ROI" type="Double" reportable="measure" formula="sum">

   <HELPTEXT>Story ROI based on Business Value/Effort</HELPTEXT>

</FIELD>


Form Definition – a read-only control bound to the new field, something like:

<Control FieldName="RippleRock.ROI" Type="FieldControl" Label="ROI" ReadOnly="True" />


The result should look something like this:

[Image: PBI with the ROI field]

The next problem to solve is how to automatically calculate the ROI value.

The standard way of calculating the ROI field is by using an Excel view of the backlog via a workitem query, recalculating the ROI every time the value or effort changes and then manually publishing back to TFS. Not a great solution!

Enter TFS Rippler. This add-on service for TFS has a number of useful capabilities, and performing field calculations is one of them. For more information about what TFS Rippler can do and how to install it, have a look at the dedicated website or other related blogs.

We can set up a couple of XML rules in TFS Rippler’s TransitionRules.xml file that will instruct it to automatically recalculate the ROI field whenever the Effort or Business Value fields change.

An example of one of the required XML rules is annotated in the image below:


When workitems are changed, TFS Rippler executes the instructions in rules that match the workitem change event. This example matches against changes to the Effort field in PBIs that are in the eligible states listed, and then updates the ROI field in the same PBI. We need a similar rule that triggers on changes to the Business Value field.

The complete set of XML rules for driving the automatic updates to the ROI field is shown below:



Some things to check:

  • We recommend testing out your changes on a test collection and project first.
  • Back up your rules files before you set about modifying them.
  • Remember to match the workitem and field names in the rules to the names defined in your process template!
  • Also pay attention to where you place the rules within the TransitionRules.xml file to ensure that the rules are in scope for the given Team Collection and Team Project.

For more information on TFS Rippler, check out the website.

Getting More Out of TFS with TFS Rippler

TFS Rippler is an add-on for Microsoft’s Team Foundation Server (TFS) that runs as a background service, providing some key benefits that you don’t get out of the box with TFS. TFS Rippler automatically and efficiently updates workitems based on a set of customisable rules, triggered by changes to workitem fields or states. Here are few examples of what TFS Rippler can be used for:

  • Populate an ROI (Return On Investment) field in your Product Backlog Item or Feature calculated by dividing the Business Value by the Effort field.
  • Aggregate values from child workitems up to a parent, for example calculating the Total Work Remaining to complete a Product Backlog Item, calculated from the remaining work on its Tasks.
  • Provide a Feature’s percentage completeness, based on the Remaining Story Points effort versus the Total Story Points required to complete the feature, aggregated from its child Product Backlog Items.
  • Automatically move a Product Backlog Item to In Progress as soon as any work starts on its Tasks.
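A rough sketch of the roll-up logic behind the aggregation and percentage-complete examples above – illustrative data only, and not TFS Rippler’s actual implementation or API:

```python
# Rolling up child Product Backlog Items to a parent Feature, in the spirit
# of the aggregation examples above. Illustrative data only - this is not
# TFS Rippler's actual implementation or API.
children = [
    {"effort": 8,  "state": "Done"},
    {"effort": 5,  "state": "In Progress"},
    {"effort": 13, "state": "New"},
]

# Total Story Points: the sum of all child Effort estimates.
total_points = sum(c["effort"] for c in children)
# Remaining Story Points: effort on children not yet Done.
remaining_points = sum(c["effort"] for c in children if c["state"] != "Done")
# Percentage complete: completed points as a share of the total.
percent_complete = 100 * (total_points - remaining_points) / total_points

print(total_points, remaining_points, round(percent_complete, 1))  # 26 18 30.8
```

TFS Rippler performs this kind of aggregation automatically via its rules whenever a child workitem changes, so the parent Feature’s fields stay current without manual recalculation.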

Here’s an example Excel Feature Dashboard that is driven by a number of TFS Rippler capabilities, working in combination to show a great overview of Feature progress for all stakeholders:


Outlined in Green we have the percentage complete for a Feature, calculated by comparing the Remaining Story Points against the Total Story Points required to complete the feature (using TFS Rippler’s aggregation facility).

In Pink outline we have the Total Story Points for a feature calculated by aggregating all the Effort estimates for the related Product Backlog Items – this total figure is then in turn used to help derive the Percentage Complete and ROI for the feature.

Outlined in Blue we have the ROI for the feature calculated by dividing its Business Value by the Total Story Points required to implement it.

In the next few blogs I’m going to show how to make all this work and delve into a lot more detail. For more information on TFS Rippler, check out the website.