The Burndown Chart Fallacy


Have you ever walked into an office and seen a huge flat screen on the wall displaying a dashboard with pretty graphs, or some other slick visualization showing important-looking numbers? Back when people still frequented offices, many had them. But have you ever wondered what they are for? No engineer is looking at them, because when you look at data you want to interact with it: zoom, pan, or apply a filter. And if the data requires immediate action, you would set an alert. So what is the dashboard for?

Dashboards are a classic managerial fad. They look good, but there is no logic or mental grounding behind them. They are a superstition, spread by word of mouth at meetups and networking events. To be fair, engineers have many of these useless dashboards as well: pretty, colorful visualizations of “Load Averages” and “free memory” which look cool but show no actionable data. The thing is, if you don’t know what a graph is for, what questions it answers, you shouldn’t be making it. A graph is created with the intent to help answer a question; it presupposes some model which relates the axes and quantities of the graph. Without a model, a graph is useless - you might as well plot random data. That sounds bad, but it is actually better than using a model you don’t understand, or one that is downright harmful. Like a burndown chart.

Burndown chart

A burn down chart is a graphical representation of work left to do versus time.... It is useful for predicting when all of the work will be completed. It is often used in agile software development methodologies such as Scrum.

The burndown chart looks good. It gives a sense of progress, of accomplishment. Look how fast we’re burning down the remaining work! But what question does it answer? What is the model behind it?

predicting when all of the work will be completed

Have you ever completed all the work? Have you even heard of a software company which “completed all the work”? No? Not surprising, once you realize that work is manufactured. In fact, there is someone at your company who is being paid to create more work - your product manager. The amount of work is arbitrary! So what is the use of predicting the completion of an arbitrary amount of work? Does this prediction help you go faster? Make your users happier? Earn more money?

Then what is it good for? Ask a project manager and they’ll tell you - likely in a long speech about the importance of coordination between teams and making sure everything is going according to plan.

Isn’t the whole point of Agile to respond quickly and change the plan based on feedback from the field? Burndown charts, like Gantt charts, belong to a family of charts known as “control charts”. They are tools based on a mechanistic model of work, in which design is completely separated from construction and the design is assumed unlikely to change. The job of managers is to make sure construction does not deviate from the plan, stays “predictable”, and “follows specifications”. This is in contrast to the “learning” model, which emphasizes short iterations and rapid adaptation of the work to new conditions. In this model design and construction are combined, and predicting when “all the work” is completed is meaningless because the backlog is assumed to change rapidly.

In the mechanistic model, you emphasize throughput¹ - units manufactured per unit of time. In the learning model, you emphasize latency - time to respond. And as queueing theory tells us, there is a basic tradeoff between the two: if you optimize for throughput, latency will suffer². Thus the burndown chart, being a tool for optimizing throughput, is downright harmful.
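That tradeoff is not just rhetoric; it falls straight out of the textbook formulas. Here is a minimal sketch in Python (my illustration, not anything from a burndown tool), using the standard M/M/1 queueing model where mean time in system is W = 1/(μ − λ); the service rate and utilization levels are made-up numbers chosen purely for illustration:

```python
# Throughput/latency tradeoff in the simplest queueing model (M/M/1):
# Poisson arrivals at rate lam, exponential service at rate mu.
# Mean time a job spends in the system: W = 1 / (mu - lam).

def mean_latency(mu: float, lam: float) -> float:
    """Mean time in system for an M/M/1 queue."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate >= service rate")
    return 1.0 / (mu - lam)

mu = 10.0  # hypothetical capacity: jobs the team can finish per week

for utilization in (0.5, 0.8, 0.9, 0.95, 0.99):
    lam = utilization * mu  # push throughput toward full capacity
    print(f"utilization {utilization:4.0%} -> "
          f"mean latency {mean_latency(mu, lam):5.2f} weeks")

# 50% -> 0.20 weeks, 80% -> 0.50, 90% -> 1.00, 95% -> 2.00, 99% -> 10.00:
# latency explodes as you squeeze throughput toward 100% of capacity.
```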

Which brings me to the hyped concept of “velocity”. Velocity is a metric of work per unit of time, which means throughput. Yes, the units of work are somewhat arbitrary, but it is a throughput metric nonetheless. I have no idea how “velocity” took such hold in the Agile world, nor do I care enough to investigate. But by optimizing for “velocity”, R&D managers everywhere are shooting themselves in the foot and making their companies less agile, sprint after sprint.
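To see why velocity is blind to agility, apply Little’s law (WIP = throughput × cycle time), another standard queueing-theory identity. A small sketch, again in Python and again with made-up numbers:

```python
# Little's law: WIP = throughput * cycle_time, hence
# cycle_time = WIP / throughput. Figures below are hypothetical.

def cycle_time(wip_points: float, velocity: float) -> float:
    """Average cycle time in sprints implied by Little's law."""
    return wip_points / velocity

# Two teams with identical "velocity" (throughput of 10 points/sprint)...
for team, wip in (("Team A", 5.0), ("Team B", 50.0)):
    print(f"{team}: velocity = 10 pts/sprint, WIP = {wip:4.0f} pts, "
          f"avg cycle time = {cycle_time(wip, 10.0):4.1f} sprints")

# ...but Team B takes 10x as long to turn any given item into shipped
# work. The velocity number alone tells you nothing about latency.
```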

Beware of models you do not understand.


  1. As Goldratt showed, the pure mechanistic model is harmful even in manufacturing.

  2. There is one way to improve both latency and throughput - reducing variation, or as Deming put it: improve quality.
