Scaling Agile: How Many Teams Are Too Many?

This is the second instalment of my “Scaling Agile” blog series. The first instalment was “Scaling Agile: A Law And Two Paradoxes”. Please read it before proceeding, as it sets the context for what follows.

In this post I’ll suggest a way to find an answer to this question:

Q1: How many people and teams can be added to the project to deliver more features in less time?

In other terms: how can the throughput of delivered features be increased? This isn’t always the right question to answer (it almost never is), as focusing on the throughput of deliverables is not the same as focusing on the value delivered to customers. In what follows, however, I’ll make some simplifying assumptions and show that, even in ideal scenarios, there are some hard limits to how much a project can scale up. In particular, I’ll assume that the project meets all the prerequisites for scaling (described in “Scaling Agile: A Law And Two Paradoxes”), which, among other things, mean:

  1. Requirements are subject to ruthless prioritisation—i.e., non-essential, low-value features are aggressively de-prioritised or binned. In this scenario there is a clear positive relationship between the features delivered and the value delivered
  2. All teams currently in the project are working at peak effectiveness and efficiency—i.e., the existing teams are already as good as they can be, and they might (but not necessarily will) be able to do more only by increasing their size or their number
  3. There are effective metrics in place to measure, among others, productivity, quality, and throughput

Being able to answer Q1 is important, as “deliver more, faster” seems to be the main reason for scaling up in most large software projects. As it happens, some time ago I was hired as a consultant on a very large-scale agile project precisely to answer that question.

The very first thing I did was to survey the literature to find out whether anybody had already answered Q1. In the process, I discovered that the scaled agile literature has quite a bit of information about the pros and cons of component vs feature teams, but—despite this being a very obvious and important issue in the context of scaling—I couldn’t find much that would help answer Q1.

Looking further, I re-read Fred Brooks’s “The Mythical Man-Month”, and came across this quote (highlights mine):

The number of months of a project depends upon its sequential constraints. The maximum number of men depends upon the number of independent subtasks. From these two quantities one can derive schedules using fewer men and more months. (The only risk is product obsolescence.) One cannot, however, get workable schedules using more men and fewer months.

If you haven’t recognised it yet, that is Amdahl’s Law applied to teams. That made perfect sense to me. Here are a couple of important implications:

  1. The total time spent in sequential activities—anything that has to be done by one team at a time but affects most or all other teams, e.g., the creation of a common component or library, setting up some common infrastructure for testing and CI, etc.—is a lower bound for the time necessary to deliver the required functionality. The project cannot go faster than that
  2. The maximum number of independent sub-projects into which the main project can be split is an upper bound on the number of teams that can be added productively to the project. Note that “independent” in this context is a relative concept—sub-projects of a big project always have some dependencies among them, and “independent” ones have just a few
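To make the first bound concrete, here is a small sketch of Amdahl’s Law applied to teams. The 20% sequential fraction is a made-up number for illustration, not a measurement:

```python
def amdahl_speedup(n_teams: int, sequential_fraction: float) -> float:
    """Best-case speedup with n_teams when a fraction of the work is
    inherently sequential (Amdahl's Law)."""
    s = sequential_fraction
    return 1.0 / (s + (1.0 - s) / n_teams)

# If 20% of the work is sequential, the speedup can never exceed
# 1 / 0.2 = 5x, no matter how many teams are added:
for n in (1, 2, 5, 10, 100, 1000):
    print(f"{n:5d} teams -> {amdahl_speedup(n, 0.2):.2f}x")
```

Note how going from 100 to 1000 teams buys almost nothing: the sequential 20% dominates long before that.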

The picture below (which is meant to be descriptive, not mathematically accurate) shows what can be achieved in practice—as you can see, it’s far less than the theoretical limits discussed above:

  • The straight line labelled “What managers hope” describes the typical attitude I’ve seen in many projects: managers add teams expecting to achieve linear scalability for throughput
  • The line labelled “What Amdahl says” describes the upper bound given by Amdahl’s Law, which tends asymptotically to a finite maximum throughput (remember the time spent in sequential activities? That’s where the asymptote comes from). Therefore, even if the teams were completely independent from each other, after a certain point adding new teams would be pointless
  • The line labelled “Reality” describes what happens in practice. The throughput increases much less than the theoretically predicted level, and peaks when the number of teams reaches a maximum k. That’s the point where communication and synchronisation issues start to become the predominant factors affecting throughput. Add any more teams than that, and the overall throughput of the project will go down. If you have ever worked on a project with more than a (very) few teams, chances are you’ve seen this happening first-hand
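The three curves can be sketched with a toy model. This is entirely illustrative: the sequential fraction and the coordination-cost coefficient are invented numbers, and the quadratic overhead term is my own simplification of Brooks’s observation that communication paths grow as n(n−1)/2:

```python
def ideal_throughput(n_teams, sequential_fraction=0.2):
    """'What Amdahl says': tends asymptotically to 1/sequential_fraction."""
    s = sequential_fraction
    return 1.0 / (s + (1.0 - s) / n_teams)

def real_throughput(n_teams, sequential_fraction=0.2, coord_cost=0.01):
    """'Reality': Amdahl's bound minus a coordination cost proportional
    to the number of pairwise communication paths, n*(n-1)/2."""
    overhead = coord_cost * n_teams * (n_teams - 1) / 2
    return max(0.0, ideal_throughput(n_teams, sequential_fraction) - overhead)

# 'What managers hope' would simply be n_teams (linear scalability).
# The peak of the 'Reality' curve gives the maximum useful team count k:
k = max(range(1, 51), key=real_throughput)
```

With these made-up parameters, k comes out at around ten teams; change the numbers and the position of the peak moves, but the shape of the curve does not.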


There are three important things to notice.

The first is that the cost per deliverable will increase (or, equivalently, productivity will decrease) more than linearly with the number of people involved, and it may become unacceptable well before scaling to k teams.

The second is that the shape of the “Reality” curve above is independent of the type of teams—component or feature, or any mix of the two—and it will always lie below Amdahl’s curve.

The third is that, independently of any methodology or process used (agile or otherwise), the curves for the throughput will always resemble the ones above. In other terms, those relationships are more fundamental than the methodologies used and cannot be eliminated or avoided.

Now, suppose cost is not a problem, and that time to market is more important. To answer Q1 we can either try to calculate the value of k analytically (which I don’t know how to do, or whether it is even possible in some contexts), or we can do something else—i.e., add people, measure the effects, and act accordingly. The second approach is the one I suggested to my client. Specifically:

  1. When increasing the size of an existing team do the following:
    • Check with the teams involved if they need help—they might already be working at peak throughput, with everybody busy, but not overloaded, in which case they are better left alone
    • If increasing the size of the team is a viable proposition, do it incrementally by adding a few people at a time. Measure the effects (using the metrics you already have in place). There may be a small throughput drop in the short term, but throughput should increase again before too long (e.g., a couple of sprints if using Scrum). If it doesn’t, or if quality suffers, understand the reasons and, if necessary, revert the decision and remove the new members from the team
  2. When adding a new team to the project do the following:
    • Ensure that the scope of work is well understood and is sufficiently self-contained with minimal and clear dependencies on other teams
    • Start small—3–4 people maximum, with the required skills, including knowledge of the technologies to be used and of the domain
    • Ensure the Product Owner for the new team is identified and available
    • Give the team all the necessary resources to perform their job—e.g., software, hardware, office space, etc.
    • Make sure there is an architect available to help the team proceed in the right direction according to the reference architecture of the system
    • Measure the effects. There may be a small decrease in throughput in the short term, but it should increase again before too long (e.g., a couple of sprints if using Scrum). If it doesn’t, or if quality suffers, understand the reasons and, if necessary, revert the decision and remove the team from the project
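The “measure, then possibly revert” step in both checklists can be sketched as a simple decision rule. This is a toy model: the grace period and the throughput figures below are invented for illustration:

```python
def should_revert(baseline, sprint_throughputs, grace_sprints=2):
    """Decide whether a scaling change (bigger team or new team) should
    be reverted.

    A short-term dip is expected, so ignore the first grace_sprints
    measurements; after that, revert if throughput has never recovered
    to at least the pre-change baseline.
    """
    settled = sprint_throughputs[grace_sprints:]
    if not settled:
        return False  # not enough data yet: keep measuring
    return max(settled) < baseline

# Baseline of 30 features/sprint; a dip in sprints 1-2, then recovery:
print(should_revert(30, [26, 28, 33, 35]))  # recovered: keep the change
# Throughput never gets back to the baseline: revert
print(should_revert(30, [26, 27, 28, 29]))
```

In practice you would apply the same kind of rule to your quality metrics as well, not just to throughput.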

As you can see, adding people might, in some circumstances, make the project faster, but there are some hard limits to the number of people and teams that can be added, and the costs will increase more (usually much more) than linearly with the number of people—even in an almost ideal situation. As my friend Allan Kelly says: “Software has diseconomies of scale – not economies of scale”.

If, despite all the dangers, you decide to scale up your project, and you try to do so by applying the recommendations above, I would love to hear your feedback about how it worked out.

The next instalment of this series will be about component teams vs feature teams.

Methodology à La Carte

à la carte |ˌä lä ˈkärt, lə|
(of a menu or restaurant) listing or serving food that can be ordered as separate items, rather than part of a set meal.

I’ve been uncomfortable with the mainstream discussions about software methodology for quite some time. It seems to me that far too many people in the software development community are on a wrongheaded quest to find The Methodology that will solve all our software development sorrows.

So far we’ve had (just to mention some popular ones): Waterfall, Spiral, Evo, RUP, DSDM, FDD, XP, Scrum, Kanban, Disciplined Agile Delivery; and also some cross-breeds, e.g., Scrum + XP, Scrumban (Scrum + Kanban), etc.

We keep finding that each of those methodologies has many strengths, but also several weaknesses that make each of them applicable in some contexts but not, easily, in others. I think this will always be the case.

Let me explain.

Let’s first look at what a methodology is. The definition I like the most is this one by Alistair Cockburn, found in [1]:

Your “methodology” is everything you regularly do to get your software out. It includes who you hire, what you hire them for, how they work together, what they produce, and how they share. It is the combined job descriptions, procedures, and conventions of everyone on your team. It is the product of your particular ecosystem and is therefore a unique construction of your organization.

According to the definition above, all the methodologies in the previous list are more accurately described as methodology frameworks—they impose some constraints, and make some assumptions about the surrounding context, but leave many (important) details to the specific implementations. (Note that those details include team dynamics and personal preferences; they are very important, but I’m going to leave them out for now.)

Constraints and assumptions are both a strength and a weakness of every framework. If they are satisfied in the context where the framework is applied, then using the framework can save time, money, and grief. However, if they are not, using the framework can become difficult, if not detrimental.

For example, think of teams working in fixed length iterations that also have to deal with support issues and point releases outside their standard iteration cycle; I’ve encountered this problem several times with different teams, and the implementation of a solution has never been straightforward.

Another example is TDD. I’m a strong advocate of TDD, however, it is a practice that requires some level of proficiency, and, in some contexts, it is just too difficult to adopt straight away. Sometimes it is just better to start by writing unit tests without caring about when they are written—first or last—as long as they are there (and, before you lambast me on this, I know perfectly well that TDD is not only about testing, but also about design; however that’s not my point here).

I can give many more examples, but the point is that, whatever the methodology framework, some of its assumptions and constraints may not be valid in some contexts.

In my opinion, a better approach would be to create a methodology per project by mixing and matching sound practices, processes and tools—which can be borrowed from existing methodologies, or the literature, e.g., [2], [3], [4]—to fit the context and the needs of the project. This is what “à la carte” is about.

Mind you, I am not claiming to have invented or discovered anything—this is what effective teams have always done (and it’s an approach I’ve been promoting for quite some time [5])—but I think that we, as a community, need to have a different kind of discussion from one focused on promoting one methodology over the others.

Some people have pointed out to me that this approach looks like Crystal [4]. I’ve certainly been influenced by it; however, what I’m describing here is neither a methodology nor a methodology family (like Crystal Clear and Crystal, respectively), since it doesn’t impose any constraints or assume any context. All it requires is discipline, mindful choices, and the willingness to improve.

That said, I think there is still a place for methodology frameworks like the ones mentioned before. In fact, you may be in the lucky position where one of them works for you straight out of the box; if you are not, however, and you fear you may fall into some form of analysis paralysis, you can choose one as a starting point, then modify it as necessary—incidentally, this is what many Scrum and Kanban teams seem to be doing anyway.

I’ll be doing more work on this, and I’ll be speaking about methodology à la carte at the upcoming ACCU 2013 conference in Oxford, UK.

In the meantime, I’ll welcome your feedback.

  1. Cockburn, Alistair, Agile Software Development: The Cooperative Game (2nd Edition) (Agile Software Development Series), Addison-Wesley Professional, 2006.

  2. Beck, Kent and Andres, Cynthia, Extreme Programming Explained: Embrace Change (2nd Edition), Addison-Wesley Professional, 2004.

  3. Coplien, James O. and Harrison, Neil B., Organizational Patterns of Agile Software Development, Prentice-Hall, Inc., 2004.

  4. Cockburn, Alistair, Crystal Clear: A Human-Powered Methodology for Small Teams, Addison-Wesley Professional, 2004.

  5. Asproni, Giovanni, Fedotov, Alexander and Fernandez, Rodrigo, An Experience Report on Implementing a Custom Agile Methodology on a C++/Python Project, Overload 64, December 2004.