RE: Developer productivity

> I have heard about 20 SLOC/hour being the realistic developer's
> productivity as a long-term (months) average - in a short burnout
> period, the developer can surely write more than 20 SLOC/hour.

The productivity numbers I talked about were NOT about how many lines of
code someone can type into an editor; they were about how much actual
usable work a programmer gets done over longer time periods. You have to
take into account the week you spend debugging an ugly problem and write
zero lines.

The link at http://www.theadvisors.com/langcomparison.htm has some not-too-old
numbers (1996). It is essentially a free link to older numbers from SPR Inc.
(who analyze project data and charge for what they find). For the C language it
gives 128 average lines of source code per function point, and a "language
level" for C of 2.5. Another table on that page gives the productivity average
for language levels 1-3, which include C, as 5-10 function points per
staff-month. Doing the math, that means 640-1,280 lines per staff-month.
Converting to staff-years gives 7,680-15,360 lines of C code per staff-year. If
you have no life outside of work and spend all your waking hours doing
development, then these numbers are probably low.
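
To make the arithmetic explicit, here is the same calculation as a minimal C
sketch (the constants are just the SPR figures quoted above):

```c
#include <stdio.h>

/* Productivity arithmetic from the SPR figures quoted above:
 * ~128 C source lines per function point, and 5-10 function
 * points per staff-month for language levels 1-3. */
int main(void)
{
    const int loc_per_fp = 128; /* average C lines per function point */
    const int fp_low     = 5;   /* function points per staff-month, low end  */
    const int fp_high    = 10;  /* function points per staff-month, high end */

    int month_low  = loc_per_fp * fp_low;   /* 640 LOC per staff-month   */
    int month_high = loc_per_fp * fp_high;  /* 1,280 LOC per staff-month */

    printf("LOC per staff-month: %d-%d\n", month_low, month_high);
    printf("LOC per staff-year:  %d-%d\n", month_low * 12, month_high * 12);
    return 0;
}
```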

I added up the source files for the e100bex Ethernet NIC sample in the DDK\src
branch: 13,632 lines of code (.c + .h files).
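
Nothing fancy is needed to reproduce that kind of count - a naive newline
tally over the .c and .h files gives the same sort of number. A throwaway
sketch (mine, not anything from the DDK; pass the file names on the command
line):

```c
#include <stdio.h>

/* Crude SLOC tally: counts raw newlines in each file named on the
 * command line and prints a running total. No filtering of comments
 * or blank lines - the same naive "add up the files" count. */
int main(int argc, char *argv[])
{
    long total = 0;
    for (int i = 1; i < argc; i++) {
        FILE *f = fopen(argv[i], "r");
        if (!f) { perror(argv[i]); continue; }
        long lines = 0;
        int c;
        while ((c = fgetc(f)) != EOF)
            if (c == '\n')
                lines++;
        fclose(f);
        printf("%8ld  %s\n", lines, argv[i]);
        total += lines;
    }
    printf("%8ld  total\n", total);
    return 0;
}
```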

Just as a reality check, I'd be curious to hear from people on this list how
long they think it would take to write the e100bex driver: 3 months, or more
like a year?

Using source-code metrics to judge staff productivity is a little bogus: if
you're being judged based ONLY on source lines, it's really easy to pad the
count with a lot of comments.

As a way to estimate project cost and staffing requirements, I think source
code size estimates are quite valuable. I can look at the DDK and say a
device like the e100bex takes about 13,000 lines of C code, which I would
probably translate into "about a man-year". If a project is going to take
about 300,000 lines of code (anybody know how many lines are in the NVidia
drivers? the binaries are something like 4 MB), then you are probably going
to need a staff of around 10 developers over 3 years. As projects get bigger,
software written is not a linear function of staff size (read The Mythical
Man-Month), as the communication overhead becomes non-trivial.
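
To illustrate that non-linearity (and this is strictly an illustration: the
13,000 LOC per staff-year calibration comes from the e100bex number above,
but the 5%-of-a-developer-per-pair communication cost is an assumption I
picked just for this sketch):

```c
#include <stdio.h>

/* Illustration only: naive staffing from a LOC budget, with a
 * Brooks-style penalty where every pair of developers burns an
 * ASSUMED 5% of one developer in communication overhead. */
int main(void)
{
    const double loc_per_staff_year = 13000.0;  /* e100bex calibration   */
    const double project_loc        = 300000.0;
    const double pair_overhead      = 0.05;     /* assumed, not measured */

    double naive_staff_years = project_loc / loc_per_staff_year; /* ~23 */

    for (int n = 5; n <= 15; n += 5) {
        double pairs     = n * (n - 1) / 2.0;         /* communication paths */
        double effective = n - pairs * pair_overhead; /* devs of real output */
        printf("%2d devs -> %4.1f effective -> ~%.1f years\n",
               n, effective, naive_staff_years / effective);
    }
    return 0;
}
```

With those made-up numbers, 10 developers land right around the 3-year mark,
but tripling the staff from 5 to 15 buys nowhere near a 3x speedup - which is
exactly the Mythical Man-Month point.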

My guess is Microsoft has looked at their development productivity quite a
bit. My guess also is that they don't have any magic that makes their
productivity much different from the rest of the industry.

My bias is that the computer language you use matters quite a bit. If you
look at the productivity tables mentioned above, they suggest that writing
in C++ has about twice the productivity of C. This matches my experience of
C vs. C++, and only applies if you are skilled in both languages. I've heard
a developer say he could do a driver faster in C than C++, and I don't doubt
that was true for that developer at that time. The question is, if that
developer became very skilled in C++, would they still say they could get
more done in C? I know it took me a number of years before I actually
believed C++ was a better language than C for driver development, but now I
strongly believe it to be so.

I do think it would be useful for Microsoft to look at the development of
Windows, and have a section in the DDK about how to estimate the complexity
of different kinds of drivers. I know my current client was a little bit in
shock when I told them how much work I thought their project would be. Of
course maybe having unrealistic estimates is a requirement to convince the
venture capitalists to give you money (so staying ignorant gives management
a good excuse). I've seen more than one startup fail because their
assumptions about Windows driver development costs were very wrong. Hardware
companies especially often put all their resources into developing the
hardware; the drivers become an afterthought with no budget. This is a
significant reason why the quality of so many drivers is so bad. The only
solution seems to be better education and estimating info. Perhaps OSR
should do a training session for management on how to make realistic
estimates of driver projects.

  • Jan

Opinion, not technical information contained herein. If you don’t want to read a bunch of pure opinion, hit delete now.


I find most of the “software science” work to be total junk science. Just a load of shite that’s been collected and generalized across dissimilar projects, with lots of unsubstantiated assumptions, lumping together devs who write GUI apps with devs who bang bits in hardware, and drawing totally unfounded conclusions. Just like most business and marketing plans, they’re not worth the cost of storage needed to hold them on the web.

Programmer productivity comes down to three factors:

1) The talent of the developer – How well the dev can translate engineering concepts into code.

2) The knowledge of the developer – How well the dev understands the problem, its constraints, and its solution universe.

3) The resources available to the developer – How well the dev’s work is facilitated with the right “stuff” he/she needs to get the job done (whether that’s specs, working hardware, hardware info, comfy chairs, pizza, dancing girls, test machines, or whatever).

In the driver world… A talented dev, who understands the device well, who understands Windows internals and the IO subsystem thoroughly, and who is given WORKING hardware, adequate machine resources, and who is LEFT ALONE to concentrate on working on only ONE project (without having to attend useless meetings, answer retarded emails, or be randomized about other projects or issues) can be EXTRAORDINARILY productive.

What’s EXTRAORDINARILY productive? It will differ for each of us, based on factors 1 and 2, plus our ability to focus and tolerance for pain. While I am certain you will doubt this, I have been able to average about 1K commented source lines of code per day (that’s 10 hours, I don’t live in Europe)… for weeks at a time… under ideal circumstances.

The ability to work under such ideal conditions is, indeed, quite rare. So, which of the above distractions does software science account for in coming up with its “scientific” estimates? Based on this, how many lines of code can I write next month? Yeah, well, it depends.

Under real-world conditions in my experience, most DEVICE drivers take 3 months to write. No less, no more. This includes having to tear your hair out trying to get the device to work (cuz the specs are wrong and the chip vendor assumed you’d “fix it in the driver”), and having to deal with your manager-troid enough to keep them off your ass while you do work.

Add to the 3 months whatever you want for intensive testing and beta release.

That’s real life.



Really, in this case the drivers aren’t an “afterthought”. Rather, the company has most of its added value and differentiation in the hardware. The driver is just a costly and annoying step – like a tax – to get their device to work on Windows (or Linux or BeOS or whatever).

When the VALUE (differentiation, value-add, whatever) of their product isn’t in the driver, companies tend to look at drivers as commodities. They could pay OSR N thousand dollars to write their driver, they could pay some local bonehead per-hour contractor N/5 thousand dollars to write their driver (some rate per hour until the driver’s done or they give up), or they could pay a driver writer N/10 thousand dollars to write their driver and hope the comments make sense and are in the country’s native language. Cuz “it’s just a pain-in-the-ass anyhow” and it’s not the major part of the product, why not spend as little as possible? Makes sense to me. That’s why here at OSR we very rarely even bid such projects.

A few months back, I had a dev manager ask me to quote a project. I told him we’d do it for something like $45K. He said that [the other firm] said they could do it for $2500. I told him “Hey, if you think they’ll deliver what you want, then you should have them do it. Seriously. Why pay what we’re asking? It’d be dumb.” The project never got done, by either firm. It was too hard for [the other firm], and OSR was too expensive. Whatever.



LOL… I’m thinkin’ it’d be a short training session :-)

Peter
OSR

Chasing the White Rabbit …

The only place I have ever worked that used, and believed in, metrics for
estimating productivity (though it was more an estimate of how long to
delivery) was Honeywell, and that was primarily because the United States
Navy had a penchant for solid estimation. I was told that the metric was
developed by taking every line of code in every project they had worked on
for a military contract over about a 20-year period, and then applying
whatever formula was used to determine the lines of code per week produced
from start to delivery. The metric I was given to use when estimating my
projects was 40 lines per week of high-level language (HLL), and 60 lines of
assembly. That was in 1981-1986, when C was newly on the scene and in most
cases the HLL was COBOL, FORTRAN, or maybe PASCAL. Kind of low, but remember
we did not have word processors or graphical interfaces to do the pretty
printing. Some poor dumb schmuck had to take my flow chart (yes, I said flow
chart) and set it up for printing, Gutenberg style.

In 30 years of doing this I have found that metric of 40/60 was a fairly
good estimate. However … most managers I have worked for, other than at
Honeywell, gave little credence to those numbers, and yet when I look at
every project I have ever worked on, I find an amazing tracking to those
numbers. But like Peter said, assigning numbers to how long it takes a
developer to write driver Y for device X is affected by many things “the
Walrus said … of ships and sails and sealing wax, of cabbages and kings”,
and mostly, like Alice, it is much ado about nothing, chasing a white rabbit
down an ever-descending hole in the ground.

I think it was Knuth who likened writing programs to writing literature.
Does anyone remember TANGLE, WEB, and WEAVE? Given that, the query becomes:
what was the Bard’s productivity? Given all the lines he wrote over his
entire productive career, what were his total lines per week for sonnets, or
plays? Could he have accurately estimated how long it would take to write
Julius Caesar? If he had one of today’s pointy-haired bosses, he would have
been expected to be “productive”, and then we would have ended up with
“Nightmare on the 17th”.

The problem with any metric is that it never takes into account
“inspiration”, and unfortunately inspiration mingled with aptitude and
experience can screw up any metric out there. I’ve seen damned easy projects
take months to do, and damned difficult projects take weeks, both with about
the same total lines of code. Why? Ask the Mad Hatter.

Gary G. Little

One thing that I find conspicuously missing in all of these models is a
good way of estimating how many lines of code it will take to
*correctly* implement the desired functionality.

I find that that’s the biggest variable between a lousy programmer and
an uber-hacker. The best of us can reduce a complicated problem down to
a really simple and short solution, and the worst of us take three times
as much code to do the same thing a third as well.

I don’t know about you guys, but we all cheer in code reviews whenever
someone fixes a bug by removing 10% of the code in the file.

All the arguing in the world about whether developers can write 40 or
400 debugged lines of code per week won’t help if you don’t know whether
your driver will require 5,000 or 30,000 lines of code (and remember that
~4,000 of those are boilerplate unless you use WDF, so the real spread is
more like 1,000 vs. 26,000 lines - a factor of ~25x :-).

Personally, I find it a lot easier to estimate based on features rather
than lines of code. As in: this feature will take about X, this one
about Y, etc. As long as there are enough of them to allow the law of
large numbers to come into play I find this approach just plain works
better than trying to SWAG out how many lines each feature will take and
how many lines of code the developers can write. Some things are just
harder to get right than others.
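
As a sketch of why the law of large numbers helps here (the feature names
and numbers below are invented purely for illustration): if the per-feature
errors are roughly independent, their variances add, so the total’s relative
uncertainty shrinks even as the estimate grows:

```c
#include <stdio.h>
#include <math.h>

/* Illustration: sum per-feature estimates and combine their errors.
 * Independent errors add in quadrature, so the total's *relative*
 * uncertainty is smaller than any single feature's. Names and
 * numbers are made up for the example. */
struct feature { const char *name; double days; double err; };

int main(void)
{
    struct feature f[] = {
        { "PnP/power plumbing",  10, 4 },
        { "DMA engine bring-up", 15, 8 },
        { "Interrupt handling",   8, 3 },
        { "IOCTL interface",      6, 2 },
        { "Diagnostics/WMI",      5, 3 },
    };
    int n = sizeof f / sizeof f[0];
    double total = 0, var = 0;

    for (int i = 0; i < n; i++) {
        total += f[i].days;
        var   += f[i].err * f[i].err;  /* variances add if independent */
    }
    printf("Estimate: %.0f days +/- %.1f (%.0f%% relative)\n",
           total, sqrt(var), 100.0 * sqrt(var) / total);
    return 0;
}
```

Each individual guess above is off by 30-60%, yet the total comes out around
+/- 23%, and it only improves as the feature list gets longer.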

It does take a certain amount of experience and chutzpah to just throw
out these estimates with an air of authority. But, frankly, nothing
other than real world experience with a lot of projects will save you
anyway.

Ray

You know…

Mr. Scott said to multiply your estimates by a factor of 4. Then you’ll
look like a miracle worker if you do any better!

Just need to find a bunch of clients to pay the 4x “Adder” - but they
are out there!!!

My 2 cents

Steve Spano
President, Finger Lakes Engineering
(V) 607-277-1614 x223
(F) 800-835-7164
(C) 607-342-1150
xxxxx@flconsult.com
www.fl-eng.com


Steve Spano wrote:

> You know…
>
> Mr. Scott said to multiply your estimates by a factor of 4. Then you’ll
> look like a miracle worker if you do any better!

Actually, this is not as silly as one might think. In my experience,
every programmer has their “K” factor. When that programmer says “this
project will take X man-weeks”, the manager has to learn what that
programmer’s K is, and do the multiplication before building a
schedule. The “K” for any given programmer tends to be constant.

Personally, my K factor is about 1.5. If my gut feel estimate comes out
to 3 weeks, I quote 5 weeks to the client, and I’m usually pretty darn
close.

The trick, of course, is getting clients to pay for your mistakes before
you figure out your own “K”.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

I always call it the GKW (god knows what) factor and add at least 10%,
though a 1.5 multiple does sound better. :-)

Gary G. Little
