The idea of the "mythical man-month"

“The Mythical Man-Month” by Fred Brooks was published in 1975, but the idea was stated at least as far back as 1969 in “Compiling Techniques” by F.R.A. Hopgood, about writing a compiler: “There is an optimum number of people who can work together on a project of this kind (probably about four) so that, although four people could produce the compiler in a year, it would be unlikely that eight people could produce it in six months and almost certain that sixteen people could not produce it in three months.”


Given the state of the art of software tools in the 1960s, that made sense. Better tools, I would guess, would drop the time by half. Knuth did a compiler as a summer project.

I think that you missed the point of the quote. This is not about how long it takes to write a compiler; it is about the relative amount of time it takes to write the same program with different numbers of developers working on it.

That is to say, if writing a particular program takes x time for one person, it could probably be done in something close to x/4 with four people, but almost certainly not in x/16 with sixteen people. Lowering the value of x doesn’t improve that ratio. (In fact, it probably makes it worse.)
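To make that concrete, here's a toy model (a sketch only: the per-pair coordination cost is an invented constant, not measured data):

```python
# Toy model: elapsed time for n people on a job that would take one
# person `work` months. pair_cost is invented purely for illustration.
def elapsed(work, n, pair_cost=0.02):
    links = n * (n - 1) / 2          # pairwise communication links
    return work / n + pair_cost * links

for n in (1, 4, 8, 16):
    print(f"{n:2d} people: {elapsed(12.0, n):5.2f} months (ideal {12.0 / n:5.2f})")
```

Even with that tiny per-pair cost, sixteen people come out slower than eight; the x/16 ratio is not just missed, it reverses.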

And it was certainly possible in 1969 for someone to write a compiler as a summer project, if it was a reasonably simple compiler. (I had a friend who did exactly this in the early '80s on a 64K CP/M system, which was probably less powerful than the mainframes commonly available in 1969 and certainly had worse tools.)

And even today, with much better tools, GCC or the Glasgow Haskell Compiler could not be rewritten from scratch in a year even by half a dozen experienced full-time developers. There are just too many things that went into those over decades of development.

“Sublinear performance gains” - just as with multiple CPUs, the communication costs become ever greater as you add more.

Thanks @rsmilward, interesting to see that earlier mention. I wonder whether it was already known in more general engineering circles that larger team sizes don’t bring linear improvement. (One advantage of a team is that you can bring in diverse specialisations and diverse levels of seniority - less experienced people are cheaper, and there’s an institutional benefit to giving them the experience they need.)

I haven’t read it (yet) but Charles Babbage wrote a detailed analysis of industry with what we’d now call an economist’s perspective, undertaking site visits and seeing what really happens:

Babbage’s purpose in writing On the Economy of Machinery and Manufactures was to examine “the mechanical principles which regulate the application of machinery to arts and manufactures” (p. iii). The book is invaluable for its detailed, nontechnical descriptions of the manufacturing technologies that were employed in English workshops at the beginning of the 1830s. Babbage had, himself, travelled extensively through the industrial districts of England as well as continental Europe. And he was, as we know from his other remarkable accomplishments, no casual observer. On the contrary, he saw everything through the inquiring eyes of someone searching for more general underlying principles, categories, or commonalities. He sought, continuously, for some basis for classification and meaningful comparison. In brief, he wanted to illuminate his subject matter by rendering it subject to quantification and calculation.

However, with a quick skim, I see Babbage mostly concentrates on division of labour and the scaling of industrial processes - communication costs don’t seem to be there. I find only this hint, about manual effort, justifying steam power:

…if still larger bodies of men or animals were necessary, not only would the difficulty of directing them become greater, but the expense would increase from the necessity of transporting food for their subsistence.

I’m thinking the man-month scaling problem arises most clearly when information is being processed, not matter. In fact not even that - the scaling up of the production of tables, discussed in Babbage, is rather mechanistic. The man-month problem comes when we are tackling things we don’t quite understand, where information flow is not unidirectional, where we are learning as we go.

I find this very interesting and I am certainly glad to see the reference; however, I think there is a subtlety here, in that Brooks went so far as to claim that not only does adding personnel not have a linear effect, it actually slows things down. From the original essay “The Mythical Man-Month” (printed in chapter 2 of the book of the same title):

Oversimplifying outrageously, we state Brooks’s Law:

Adding manpower to a late software project makes it later.

(Emphasis original)


I read the Fred Brooks book in 1976 and saw the ‘law’ confirmed over and over.

A story I told every time to the ‘manager’ who told me to hire more programmers was:

It takes one man and a woman nine months to produce a baby. Throwing more men in may be fun, but it still takes nine months for the baby to arrive!

That just proves men are SLOW.

I was thinking: can you add more men to a project? In the 1960s, two programmers, a card-punch person and a supervisor sounds ample to me. Many times a program is undefined until it has gone through a few revisions. Not having read the book, was that a factor in slowing things down? Algol did not take years to write but years to define. Ben.

But in manufacturing team sizes do often bring linear improvement. If you sit someone down at a workbench with a can of black paint, a small cube, and instructions on how to paint the faces, you soon end up with a die for use in playing games. If you make up a second workbench with a can of red paint, a small cube and the same instructions, and sit someone down at it, you now are producing two dice, one with black spots and one with red spots, in the same interval of time.

This works the same way in computing. Put together an input file, a computer program and a computer to run it, and soon you get your output file. Add a second different input file, the program, and a second computer and you are now producing output files at twice the rate.
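As a sketch of that idea (hypothetical file names, and assuming the inputs really are independent), the same program run over several inputs scales out trivially:

```python
# Independent inputs, independent runs: throughput scales with the
# number of workers, exactly like adding a second workbench.
from concurrent.futures import ProcessPoolExecutor

def run_program(input_file):
    # Stand-in for the real program: read one input, write one output.
    with open(input_file) as f:
        text = f.read()
    out_file = input_file + ".out"
    with open(out_file, "w") as f:
        f.write(text.upper())
    return out_file

if __name__ == "__main__":
    inputs = ["input1.txt", "input2.txt"]  # hypothetical input files
    with ProcessPoolExecutor() as pool:
        for produced in pool.map(run_program, inputs):
            print("produced", produced)
```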

Of course confusion always arises when people try to equate manufacturing not to its real parallel in IT, the running of a program, but instead to the design and preparation of a program, which is actually parallel to picking out an appropriate type of brush and writing up the list of instructions for the painter. If that confusion is avoided, I think that most people have no difficulty understanding that ten people will not pick out the right brush ten times as fast as one person would.

I expect that if you go back quite a ways in time you’ll find evidence that people understood that inventing things usually can’t be sped up linearly by adding more people.

And indeed it does, when managers try to add personnel to speed up the one section of the project that is running late. Then it slows down further, or even crashes completely.
Of course there’s a right way to add personnel to a project too, and that’s when the project is well sectioned. In my youth I was brought into a project to write a DMA driver, and that didn’t slow down the project at all: they had physicists to work on the FFTs and algorithms, programmers to work on the communication protocol, technical people to write the test procedures, and so on. But nobody to write the DMA driver.

But that’s not what the “MMM” is about. That’s production, not development. Completely different problems. Consider car development: you can’t throw 1000 engineers at a car and develop one 1000 times faster than a single engineer. The components aren’t that fine-grained, nor the interfaces that clear-cut. Plus there’s a slew of unknown dynamic interdependencies that are difficult to predict (especially in the days before computer modeling).

You also can’t even throw 1000 mechanical draftsmen at the project hoping to speed it up. It can help, but eventually the bottleneck becomes the engineers requesting the drawings: they simply can’t feed the draftsmen enough work. As the interdependencies grow, the communication starts impacting the actual work.

There’s a great saying: “I demand constant detailed reports as to why you’re late.”

Eventually the cost of communication overwhelms actual progress, and without communication you have idle teams – teams that need to be brought up to speed and trained. How many times have you been doing something, someone has asked if they could help, and you realized that by the time you brought them up to speed on what needed to be done, you could have finished the job yourself?

So, there’s a balance. Historically, small teams in software have had very good velocity.

I’m not sure what the “MMM” is, but the rest of your post seems to be a restatement of mine: computer programming is not like manufacturing widgets; it is like designing the system that manufactures the widgets.

MMM would be “Mythical Man-Month”, of course.

The interpretation I’ve heard is that the reason for Fred Brooks’s observation was that the communication overhead would outweigh any gains from adding people to the project. Although the number of people in the team might increase linearly, the number of communication links between people would grow with the square of the number of people (n*(n-1), to be exact).

<pedantic>n*(n-1)/2</pedantic>
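And that pedantically corrected count grows fast; a two-line check:

```python
# Pairwise links in a fully connected team of n people: n*(n-1)/2.
for n in (2, 4, 8, 16, 32):
    print(f"{n:2d} people -> {n * (n - 1) // 2:3d} links")
```

Doubling the team from 8 to 16 takes the links from 28 to 120: the coordination overhead roughly quadruples for every doubling of headcount.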


I would suggest that history has proven that a lot of good projects can be completed in record time by collaborative partnerships.

I am thinking of the likes of Bill Gates and Paul Allen, Steve Wozniak and Steve Jobs, Ken Thompson and Dennis Ritchie.

It could be argued that there were others involved in these projects, but the bulk of the work was achieved by a team of two.

We are talking about a vastly different age, the first half of the 1970s, when development tools were simpler but there were arguably fewer distractions, such as the barrage of pop-up notifications we now all have to deal with each day.

If you only had access to a mainframe for a few hours at night, you would be entirely focussed on the single task ahead.

As the number of engineers increases, so do the communication and management overheads. In my last job I spent 40% of my time accounting for my time to pointless management information systems. Needless to say, I resigned from that job.

Some of my best projects have involved just myself and another engineer. I would take on the hardware and PCB layout tasks: develop the prototype hardware, debug the hardware, and then hand it over to a software engineer colleague.

It is true that some of the best-known computer scientists have achieved tremendous work by ostensibly working alone: Knuth, Wirth, Moore, Hopper and others. By working alone, they have managed to achieve maximum productivity without disruption to their thought processes by human or electronic interrupts.

At the other end of the spectrum are the massive projects, such as the Apollo program, which consumed hundreds of thousands of engineers, technicians, sub-contractors and managers for a period of approximately 11 years. I can only imagine the unbelievable overheads of communication and coordination between such a large number of organisations. Probably only achievable thanks to the huge budget available and the “failure is not an option” mentality applied to the Apollo program.

So whilst the Hopgood quote above suggested that four collaborators was the optimum, I would proffer that somewhere between one and four is historically more likely, depending entirely on the nature and size of the project.

And perhaps depending on the people involved. Some developers hate working with others, and seem to prefer to go into isolation to do their work. I’m quite the opposite; I find that interacting with others, even if they’re not expert in the systems I’m working on, improves my work. In particular, it speeds things up because the frequent review keeps me from going too far down the wrong path when I’m working on something.

(In my experience, the “wrong path” thing does happen with the “isolation” types as well, but they often just grind through it and it’s not addressed until someone comes back later to fix something in the code or add another feature. Or just never addressed at all, and the cost burden just borne.)

It will also depend on the type of project, I suppose. If it’s “write this web site again, just like you wrote it before,” what to do is probably fairly well defined. My projects tend to be, “somehow deal with this problem that we’re having a lot of difficulty even defining.” Generally, my workload seems to be around 35% problem definition and analysis, 35% developing tests (which always includes more definition and analysis), and 30% or less writing the actual code that runs in production.

This is not entirely relevant, but I met Fred Brooks years ago at a Siggraph conference and he was SO incredibly nice and gracious. I was suffering through giving a tech demo that almost nobody attended and he was so kind to me in a moment of semi-despair. I felt like perhaps some part of the empathy and humanity that gave rise to Mythical Man Month was being exhibited on a human, personal level.
