I have spent the best part of the last decade reviewing code of all
descriptions, everything from kernel-mode C and assembler to T-SQL stored
procedures and VBScript, and everything I am compelled to look at I judge
by the same metrics. How difficult is it to:
- understand the execution environment
- understand the points of failure
- understand the side effects of expressions & statements
In my own case, I don’t really care how quick and easy it is to author an
application using a wizard; I care about how difficult my job of reviewing
and correcting your work is. And yes, I am arrogant or foolish enough to
insinuate that your work would need correction, even though you hold a PhD
and have taught programming for many years and I have no academic
credentials whatsoever.
I suppose it is also a matter of standards. Around here, I am responsible
for establishing coding standards and then upholding them, no matter how
many millions of dollars a day they may cost. It does tend to make one very
particular about certain things; or long for the days when one actually
wrote code.
wrote in message news:xxxxx@ntdev…
It’s all about the protocols. Too many coders used MFC as if they were
programming for the raw API, and it doesn’t work well in that model. MFC
has a model for doing graphics, and if you follow it, the work is trivial
to get right. I have been programming in it since 1995 and have never had
the experiences you describe. It’s like the people who want Windows
drivers to look like their favorite RTOS and start doing things like
passing handles to event objects between user space and kernel space, in
the mistaken belief that events are somehow “faster” than
IoCompleteRequest (they aren’t; if anything, they can be slower). This
adds massive complexity to what should be a simple problem. I’ve watched
people who learned from the standard Windows API texts try to use MFC in
completely inappropriate ways, and complain bitterly when it doesn’t do
things the way they used to. (I was a 16-year veteran of the MFC
newsgroup, by the way). I can, or used to be able to, knock out an
MFC-based GUI app as fast as, or faster than, a VB programmer could.
It’s because I knew how to write to the MFC interface, and I never had
problems with “bugs”. Nor did I find the framework “fragile”. So I take
your view not with a grain of salt, but a very large sack of salt.
And I think you are confused about acronyms. GDI is doing graphics. I
have done some completely awesome graphics in MFC (for example, a shaded
cylinder that can be rotated to any angle in 3-space). There is GDI
programming, GUI programming (which includes menuing, controls, etc.) and
raw API programming. I never found an advantage to raw API programming,
and when I moved to MFC, my only distress was why it had taken me so long
to discover it.
In my Advanced Systems Programming class, I have the students build a
full-fledged GUI app using a dialog box, and we do it in 20 minutes, and
that is because I have to slow the pace to the slowest student. When I
build the same project myself, I can do the GUI part in under five
minutes. And get it right the first time.
It’s all in having the right paradigms in your head. Work against the
framework, and you’re screwed. Work with it, and things are easy.
Compare “inverted call” to the baroque event-notification crap people seem
to reinvent with frightening regularity. Building a complex solution to a
simple problem by ignoring the framework paradigms is a losing game.
joe
Respectfully, in my experience, GDI programming was easier than MFC, and
easier to debug. As no sane developer will start a new project using either
for the UI, it is probably a moot point, and perhaps it was a consequence of
my utter loathing of significant ‘features’ of C++, but I always found bugs
in MFC code particularly difficult to detect and the whole framework
fragile. GDI-based code was long by comparison, but comparatively easy to
read, and bugs tended to stand out; but this is all a dim memory for me now.
C# has taken over in our shop for UI development, and fortunately I no
longer get sent any C++ or MFC-based code to review.
wrote in message news:xxxxx@ntdev…
Pre-optimization in the absence of performance data is rarely productive,
and usually has a profound negative impact on the development time and
long-term maintainability of code.
If you don’t have the performance numbers for YOUR driver, you have no
basis for a decision about optimizations.
Lines-of-code optimizations usually gain single-digit-percentage
improvements except in very specialized cases (e.g., the inner loop of a
convolution algorithm). Architectural changes can give you orders of
magnitude improvement. Note that concepts like the “Fast I/O Path” were
developed because actual performance data indicated that the standard
mechanism was not giving adequate performance.
There were many articles on Win16 programming that were only one level
away from programming in assembly code, such as building dispatch tables
of pointers indexed by message number (and note that these tables had to
be HUGE). And there were those who said “MFC is inefficient”. Yeah, in
an absolute sense, but who cares? MFC code is easier to develop, and
easier to maintain, than raw Win32 API code, or even worse, “optimized”
code with direct-indexed dispatch tables. In fact, there are often two
complete traversals of the dispatch logic, one to decide which menu items
should be enabled and one to react to a menu item being clicked. But in a
machine that can execute several billion instructions per second, these
are not even detectable overheads.
Build your driver. Measure its performance. If you see problems, there
are optimizations that would not require dropping all the way back to WDM.
And most of the optimizations will probably be architectural.
joe
> [quote]
> The reason given for using WDM over KMDF was performance.
> [/quote]
>
> Anything SPECIFIC? A great many sins have been covered up by saying they
> were committed in the interest of “performance.”
>
> [quote]
> I’ve never considered KMDF to be significantly slower than WDM, but I’m
> willing to accept I’m wrong.
> [/quote]
>
> When taken as a whole, end-to-end, a given KMDF driver is rarely slower
> than a comparable WDM driver. There may be specific, targeted, individual
> pieces of each driver that are faster or slower than the other… but
> unless those specific, individual pieces are “make or break” for the
> driver, then I’ve personally never seen “performance” as a valid reason
> to choose WDM over KMDF.
>
> Just “performance”?? It sounds a lot like people back in the day who
> continued to write code in assembler language instead of a higher-level
> language because of “performance”… when the real reason was that they
> were too lazy or scared to learn how to write proper code in that
> higher-level language.
>
> Peter
> OSR
>
> —
> NTDEV is sponsored by OSR
>
> For our schedule of WDF, WDM, debugging and other seminars visit:
> http://www.osr.com/seminars
>
> To unsubscribe, visit the List Server section of OSR Online at
> http://www.osronline.com/page.cfm?name=ListServer
>