Gantt charts look cool. The ones I can make using MS-Project show task names, durations, assigned resources and milestones. All in color…in whatever fonts and fill patterns I want to use. In my experience, few things about a project proposal impress people so completely as a really nice-looking Gantt chart. Sad, but true.
One of the most important things a Gantt chart is supposed to show is the task dependencies in a project plan. Some of these task dependencies will be part of the critical path of the project. Good project managers spend a lot of time fretting about the critical path – as well they should. If any task on the critical path falls behind schedule, the whole project falls behind.
You don’t see a lot of Gantt charts on Agile projects though. In fact, a recent survey from Scott Ambler indicates that they are considered to be the least valuable project artifacts around. Is this because Agile doesn’t have planning? Not at all. This is because Agile methods have the fascinating effect of radically reducing task dependencies.
How can project dependencies, which seem inherent in a given problem domain, be reduced by the project method? Well, first, you have to come to grips with the fact that many things we think are dependencies in software really are not. For example, some people feel strongly that you should build the physical data model before building the application logic and that you should build the user interface last. This kind of thinking would align software with the stages of building a house, wherein the database is like the foundation. The fact is, however, that software does not need to be built that way. In fact, some people would argue strenuously against that particular sequence (as a bit of trivia, not everybody builds houses that way either).
So, perhaps building software in a certain order isn’t really required, but surely it is more efficient to do it a particular way, right? I’m not one to favor the idea that there is always one best way, but it does seem intuitive that it would be more efficient for 5 different developers to be working in parallel on 5 different features if they were all building the features into a clear and complete application architecture. That way they could, for example, take advantage of existing persistence and security mechanisms instead of inventing those wheels themselves in their feature code.
Recognizing that, the good PM will develop a project plan that has the architect building the architecture first and then the developers showing up to build the features next. Of course the architect will need to know the requirements of all the features in order to build the architecture correctly, so the analysts will have to show up first to get the requirements right… Before you know it, you have a project plan with the standard set of phases all lined up into one gigantic critical path – which looks really cool on a Gantt chart.
Let’s go back to the roots of the scenario though. It assumes that the persistence and security mechanisms correctly support the needs of all 5 feature areas being developed. That, in turn, assumes that the requirements of those feature areas were captured accurately AND that they won’t change notably, which assumes that the users can articulate the requirements properly to begin with. Those are all assumptions that research (citations) has shown to be, well, problematic at best.
The same Agile principles that eschew those assumptions are the ones that reduce dependencies during the course of a project. I have lumped some of these principles together wherein each combined lump contributes to reducing dependencies in a particular way. The specific lumps of principles that I have seen reduce dependencies the most are:
Agile Requirements Management, Iteration Planning and Self-Organization
When you have a granular, ever-changing list of software requirements (or features or enhancements or whatever) and the team prioritizes and commits to just a few of those each iteration AND the team then organizes their own way of delivering those requirements in the space of a few weeks or less, it is really, really hard to make a Gantt chart.
How, for example, can you make a reasonable Gantt chart of a list of 4 features, 9 defects and 3 tasks – most of which are rather unrelated to each other? The list is prioritized on business value and sometimes also technical risk; it reflects a consumer's perspective, not what makes sense from a builder's perspective. The question of what would be the most efficient way of doing it is only asked within the iteration, when the team members are jotting down their rough sub-tasks for completing their particular scope items.
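To make the contrast concrete, here is a minimal sketch of such a backlog in Python. Every item name and score below is invented for illustration; the only point is that the ordering falls out of value and risk, with no builder's dependency graph anywhere in sight.

```python
# A hypothetical iteration backlog: features, defects, and tasks mixed
# together. All names and scores are made up for this example.
backlog = [
    {"item": "Export report to PDF",  "type": "feature", "value": 8, "risk": 2},
    {"item": "Fix login timeout",     "type": "defect",  "value": 9, "risk": 1},
    {"item": "Upgrade build server",  "type": "task",    "value": 3, "risk": 5},
    {"item": "Wobblerator tuning UI", "type": "feature", "value": 6, "risk": 7},
]

# Rank by business value first, technical risk second, highest first.
# Note there is no dependency ordering here at all -- nothing in the
# list says what must precede what.
ranked = sorted(backlog, key=lambda i: (i["value"], i["risk"]), reverse=True)

for i in ranked:
    print(f'{i["value"]:>2}  {i["type"]:<8} {i["item"]}')
```

The team decides the "how" inside the iteration; the list itself only says what matters most.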
If, as a diligent PM, you scurry around and try to turn their notes and white-board checklists into a Gantt chart, you'll find the results very disappointing. You'll probably have several lists of tasks, many of which are performed in parallel and some of which are semi-sequential, at best. If you are hopelessly cranked up on stimulants or suffer from OCD (most real PMs will nod to both), you will still attempt to fashion these lists into Gantt charts, or at least a Work Breakdown Structure (for Pete's sake, you've got to manage something!), so that you can start lining out who is doing which assignments… Well, if your Agile project goes like the ones I've seen, the team members will display the irritating behavior of constantly moving around to different tasks as seems most effective to them within the iteration – regardless of what you've written down.
If you manage to finish a Gantt chart or WBS, it will likely only be because the iteration is almost over and you are able to record the history. Interestingly enough, nobody will be curious enough, or even polite enough, to ask to see your work. It simply is not helpful in any practical way for anybody to use on the project going forward.
The list of items being worked within any iteration is short and relatively simple and, for the most part, doesn’t need sophisticated work breakdowns, sequencing, lag buffering, etc. In other words, the iteration work doesn’t require sophisticated planning. It is small enough to be managed quite effectively using simple tools – basic task lists and verbal communication.
Generalist Team Members and Pair Programming
What about division and separation of labor? Surely it would be more efficient for the expert systems analyst on the team to do all the requirements and modeling work and then hand that off to the developers for programming. And then, surely, it would be more efficient for the same developer to work on the enhancement to the wobblerator feature and the 3 defects in the wobblerator, right?
Well, first of all, let's talk about efficiency. No, on second thought, let's not. Do this instead: go to Purdue University, or some other institution that knows something about industrial engineering or production planning, and take some classes. Or check out some links on queuing theory and sub-optimization; or try stochastic modeling – that crap's hard!
Sigh. OK, don't do that either. I was getting a little bitter. I have to say that I've become tired of 'software' people talking about issues related to production and efficiency as if they were actually well studied in the subject. Having recently moved closer to the manufacturing world, I'll give you my best understanding of that stuff as it applies to software teams.
The gist is that, in certain situations, the overall efficiency (for the whole team or whole project) that you would gain by having a specialist team member do all the work they specialize in would not make up for the inefficiencies caused by queuing work through that single resource. You would create a bottleneck around that resource. This will typically lead you to try to pipeline your whole project through a series of specialists, which introduces more queuing problems. The fact is that you can do this kind of thing in manufacturing and gain real efficiency, but it is extremely difficult to do it in software development and gain real efficiency. You might, in some situations, gain some level of predictability, but at the cost of lower efficiency. The predictability arrives as the PM finally gets a handle on the buffers necessary to keep all the resources fed with a pipeline of work (and if a hundred other things fall into place). The lower efficiency takes the form of Parkinson's Law: the resources simply sit, more or less idle, using up their buffer time without doing any more work. Few people ever think, or ever admit, that they are really doing nothing for hours at a time on projects like this, but I have seen it. It's for real.
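For anyone who would rather see the queuing effect than take my word for it, here is a deliberately crude simulation. All of the numbers, staffing, and scheduling rules are invented for the sketch – this is not a real production model – but it shows how routing every item through one specialist stalls everyone downstream even though both teams have the same headcount and the same total work.

```python
import random

random.seed(42)

# 20 work items, each needing some "analysis" effort followed by some
# "build" effort (in hours). Durations are random and purely illustrative.
items = [(random.uniform(1, 4), random.uniform(2, 8)) for _ in range(20)]

def specialist_pipeline(items):
    """One analyst does all analysis, then three developers build.
    Every item queues behind the single analyst, so the developers
    spend much of their time waiting for work to arrive."""
    analyst_free = 0.0
    dev_free = [0.0, 0.0, 0.0]
    finish_times = []
    for analysis, build in items:
        analyst_free += analysis              # item waits its turn at the analyst
        d = dev_free.index(min(dev_free))     # next available developer
        start = max(analyst_free, dev_free[d])
        dev_free[d] = start + build
        finish_times.append(dev_free[d])
    return max(finish_times)                  # makespan: when the last item finishes

def generalists(items):
    """Four generalists each take a whole item (analysis + build) end to end."""
    free = [0.0] * 4
    for analysis, build in items:
        g = free.index(min(free))
        free[g] += analysis + build
    return max(free)

print(f"specialist pipeline makespan: {specialist_pipeline(items):.1f} h")
print(f"generalist team makespan:     {generalists(items):.1f} h")
```

Same four people, same total hours of work; the generalist team finishes sooner because no single resource ever becomes the gate everything queues behind.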
Whew, I'm glad that's over.
Now, if you have team members who will lower themselves to wear many hats, you can avoid some of the problems of resource bottlenecks. For example, everybody is qualified enough to ask what the requirements are for a single feature; and if not, they can pair up with the requirements specialist (who happens to still be on the team) for a couple of hours. After they do that a few times, guess what? They start learning how to elicit the requirements themselves. Maybe never as well as the specialist, but well enough to get the work done and reduce the resource dependency. In the meantime, the requirements specialist has turned into quite a capable tester – having paired up with a really good tester on the team – who has been writing some end user documentation.
As developers pair up, I have (as a PM-type) been amazed at the sense of well-being I get from knowing that anybody on the team could get hit by a bus and it wouldn’t be too terrible – for the project. Well, that’s not nice to say, but it’s true. I have seen pairing provide enough cross-pollination of knowledge that not only can key team members leave the project without killing it, but new team members can join the project and almost immediately become productive. No kidding.
That reduces, dramatically, the number of dependencies due to resource constraints.
Refactoring and Regression Testing
Well, this post is so long already it's just ridiculous, so let's get right to the point on this one. Back toward the top of this post, we talked about the example of getting the foundation of the software in place prior to building the features. The magic of automated regression testing (a la [N]Unit and other nifty tools) allows developers to start developing basically anywhere and evolve the solution (features, architecture and all) into place over time. Without massive regression testing, it would be a nightmare to attempt to fundamentally change something about the architecture after numerous features have been developed. With massive regression testing, it is not too big of a deal. No kidding. I've seen it work; it still amazes me, but it works.
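As a tiny illustration of the safety net, here is a sketch in Python's `unittest` (the same xUnit family as [N]Unit). The feature code is invented for the example: a quick first implementation of some pricing logic sits next to a later refactoring of it, and because the test suite pins down behavior rather than structure, the refactored version can be dropped in and verified instantly.

```python
import unittest

def total_price_v1(line_items):
    """The quick version written when the feature first landed.
    line_items is a list of (quantity, unit_price) pairs."""
    total = 0.0
    for qty, unit in line_items:
        total = total + qty * unit
    return total

def total_price_v2(line_items):
    """A later refactoring: same observable behavior, different structure."""
    return sum(qty * unit for qty, unit in line_items)

class TotalPriceTests(unittest.TestCase):
    """The regression suite describes what the feature must do, not how it
    does it, so either implementation can sit behind it."""

    def test_empty_order(self):
        self.assertEqual(total_price_v1([]), 0.0)
        self.assertEqual(total_price_v2([]), 0.0)

    def test_mixed_order(self):
        order = [(2, 3.50), (1, 10.00)]
        self.assertEqual(total_price_v1(order), 17.0)
        self.assertEqual(total_price_v2(order), 17.0)
```

Run it with something like `python -m unittest` pointed at the file; when both versions pass, the refactoring is safe as far as the suite can see – which is exactly the property that lets a team evolve the architecture after the features already exist.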
Allowing the team to reserve some of their capacity for refactoring (or continuous design as some have called it) and supporting the ability to refactor without terrible consequences allows you to drop even the intuitive and ‘safe-feeling’ macro-level sequencing like ‘gotta get the architecture right first’.
Now, after all that, keep in mind that while going around your project incinerating dependencies like abandoned buildings might sound like fun, nobody will laugh at you if you start slowly and let your Agile projects prove themselves capable bit by bit. Well, OK, some people will laugh. But some always do.
Would you like to argue in favor of more dependencies on software projects? Post a comment. I dare you.