C++ RTL for NT kernel-mode drivers

/kernel, which is the default for compiling drivers in WDK 8 and 8.1, does not allow the use of C++ exceptions or RTTI at compile time (as opposed to the previous behavior of failing at link time due to unresolved symbols).
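
For illustration, a minimal sketch (not taken from any real driver) of the kind of construct that /kernel now rejects in the compiler itself:

// Hypothetical fragment, shown only to illustrate what /kernel refuses to build.
// With /kernel, the try/catch and the dynamic_cast below are rejected at compile
// time; without /kernel the same code compiles, but then fails to link unless
// some C++ runtime supplies the EH/RTTI support routines.
#include <ntddk.h>

struct Base    { virtual ~Base() {} };
struct Derived : Base {};

extern "C" NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(DriverObject);
    UNREFERENCED_PARAMETER(RegistryPath);

    try {                                         // C++ EH: not allowed under /kernel
        Derived d;
        Base* b = &d;
        if (dynamic_cast<Derived*>(b) == NULL)    // RTTI: not allowed under /kernel
            throw 1;
    } catch (...) {
        return STATUS_UNSUCCESSFUL;
    }
    return STATUS_SUCCESS;
}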

d

-----Original Message-----
From: xxxxx@seznam.cz
Sent: Monday, November 11, 2013 11:02 AM
To: Windows System Software Devs Interest List
Subject: RE:[ntdev] C++ RTL for NT kernel-mode drivers

Object classes and their interfaces are very similar in the two frameworks.

That isn’t surprising, because both KMDF and IOKit expose concepts for device-driver development, and those concepts are in turn dictated by the hardware architecture. Such an explanation seems plausible.

I remember seeing CF in the Darwin kernel/IOKit; IIRC, OSDynamicCast is in CF.
I’m probably wrong, though. It was about 2.5 years ago that I last looked at it seriously.

Oh well, I’ve just looked at the xnu sources, and OSDynamicCast is defined in
xnu-2050.22.13\libkern\libkern\c++\OSMetaClass.h
as a macro that expands into the internal implementation.
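
For a flavor of how it gets used, here is a rough IOKit-style sketch (the driver class and the IOPCIDevice choice are just illustrative, not from xnu):

// Illustrative only: a safe down-cast via IOKit's home-grown run-time typing.
// OSDynamicCast(IOPCIDevice, provider) yields NULL when 'provider' is not an
// IOPCIDevice (or a subclass), much like dynamic_cast<> would, but with no
// compiler RTTI involved -- the OSMetaClass machinery does the bookkeeping.
#include <IOKit/IOService.h>
#include <IOKit/pci/IOPCIDevice.h>

class com_example_SampleDriver : public IOService
{
    OSDeclareDefaultStructors(com_example_SampleDriver)
public:
    virtual bool start(IOService* provider);
};

OSDefineMetaClassAndStructors(com_example_SampleDriver, IOService)

bool com_example_SampleDriver::start(IOService* provider)
{
    if (!IOService::start(provider))
        return false;

    IOPCIDevice* pci = OSDynamicCast(IOPCIDevice, provider);
    if (pci == NULL)
        return false;   // not the provider type we expected

    return true;
}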

And as proof that Objective-C is banned from the kernel, look at this excerpt from xnu -)
#ifdef KERNEL
#include
#include
#else
#include
#include
#endif /* KERNEL */
And take into account that ‘CoreFoundation.h’ is part of the user-land libraries.

>also it exposes
>some kind of reflection.
>
>This is the feature which C++ lacks. And that’s why “lack of standard RTTI” is
>not a drawback. People use their own.

Yes, well, RTTI is not a popular solution at the moment. It can be useful for classes with complex multiple and virtual inheritance, but such classes are in turn difficult to design and can hardly be recommended for all-round use.
But by the time it was decided to include RTTI in the kernel RTL, almost all the necessary prerequisites were already in place: the EH library, the library of iterators over internal compiler data, and the basic know-how. Moreover, MS had released MSVS2012 RC by then, with the RTTI sources included, so all the unclear points went away and the kernel RTTI became ready fairly quickly -)



> /kernel, which is the default for compiling drivers in WDK 8 and 8.1 does not allow the use of C++ exceptions or RTTI at compile time (as opposed to previous failures at link time due to unresolved symbols)

It appears not to be a problem at all, because the RTL build system doesn’t use the WDK build, MSBuild, or VS projects/solutions; it is completely custom and based on GNU make. It sets up the compiler/librarian/linker options necessary to get a C++ driver with EH and RTTI.

As a curious fact, this RTL was conceived after the first NT driver I wrote started with try{}catch(…){} in its DriverMain() and instantly led to the ‘unresolved symbols’ you mentioned :wink:
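
To give a rough idea of what “sets up the options” means here, a simplified sketch (illustrative only; the project’s real makefiles differ, and the RTL import library name kmcpprtl.lib is made up):

rem Compile WITHOUT /kernel, WITH EH (/EHsc) and RTTI (/GR), and without the
rem user-mode CRT (/Zl); the custom kernel RTL then has to supply the EH/RTTI
rem support routines that the linker would otherwise report as unresolved.
cl.exe   /nologo /c /Zl /GS- /Gy /EHsc /GR driver.cpp

rem Link as a native driver image against the kernel import library plus the
rem custom RTL (library name is hypothetical).
link.exe /DRIVER /SUBSYSTEM:NATIVE /ENTRY:DriverEntry /NODEFAULTLIB ^
         driver.obj kmcpprtl.lib ntoskrnl.lib /OUT:driver.sys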

Hmmm… that seems a bit problematic. You might want to rethink
incompatibility with the only supported kernel compiler tool-chain.

Mark Roddy


> Also note: C++ standard RTTI is a pathetic kludge, and thus is usually replaced with better hand-made stuff, like DECLARE_DYNAMIC in MFC or OSDynamicCast in IOKit.

You seem to have a very, very confused grasp of reality. DECLARE_DYNAMIC
and friends were not “added” because “RTTI is a pathetic kludge”. They
were CREATED because RTTI ***DID NOT EXIST***.

RTTI was added more than 15 years after DECLARE_DYNAMIC was created. Its
behavior is slightly different, and replacing it with an RTTI
implementation would break many existing MFC programs. So we’re stuck
with it.
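
For anyone who has not used MFC, a small user-mode sketch of the two mechanisms side by side (purely illustrative):

// Illustrative MFC snippet: hand-rolled run-time typing vs. compiler RTTI.
#include <afx.h>

class CShape : public CObject
{
    DECLARE_DYNAMIC(CShape)
};
IMPLEMENT_DYNAMIC(CShape, CObject)

class CCircle : public CShape
{
    DECLARE_DYNAMIC(CCircle)
};
IMPLEMENT_DYNAMIC(CCircle, CShape)

void Example(CObject* p)
{
    // MFC's own mechanism, available long before compiler RTTI existed:
    if (p->IsKindOf(RUNTIME_CLASS(CCircle)))
    {
        CCircle* c1 = static_cast<CCircle*>(p);   // safe after the check
        (void)c1;
    }

    // Compiler RTTI, which came years later; requires /GR and a polymorphic
    // base (CObject has a virtual destructor, so this works):
    CCircle* c2 = dynamic_cast<CCircle*>(p);
    (void)c2;

    // DECLARE_DYNCREATE additionally gives CRuntimeClass::CreateObject(), a
    // "virtual constructor" that plain RTTI never provided.
}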
joe

It looks like you are being excessively strict about RTTI. In my opinion, RTTI is just a low-level crutch exposed by a statically-typed language for some limited run-time purposes, and it does its duties well enough.
Let’s also recall that when OSDynamicCast appeared, C++ was in a somewhat chaotic state. Moreover, OSDynamicCast provides not just RTTI in the C++ sense; it also exposes some kind of reflection.



> Hmmm… that seems a bit problematic. You might want to rethink incompatibility with the only supported kernel compiler tool-chain.

Currently, there seems to be no reason for that.

If by the problem you mean the ‘/kernel’ switch, well, it seems to be just smoke and mirrors meant to keep developers away from officially unsupported stuff. How can it matter in our particular case, if the project intentionally ignores that prevention?

Now, does the custom build system appear to be a problem?
The currently released WDK 8.1 and the earlier WDK 8.0 are not complete stand-alone solutions for driver development. Roughly speaking, they are just sets of headers and libraries; there are no compilers, linkers, or build tools in them any more. The new WDKs require Visual Studio, whose wizards/projects/solutions have to be used. All this is rather annoying and a great overkill for a small, odd project like this kernel RTL. If MS changed its mind and released a modern, humane WDK with a complete toolset for ordinary folks without root digital certificates, a large production code base, and other perks, then it would be worth writing the ‘wdk-build’ scripts.

But for now, with the custom build system, you can build everything in the project using just GNU make and one of the supported DDKs/WDKs:
ddk2600, ddk3790sp1, wdk6.1sp1, wdk7.1.0, wdk8.0+msvc2012.
Support for wdk8.1+msvc2013 is planned to be added at some point.

If you want developers to adopt your library, you really should support building within the MSFT-defined tool-chain. As it is, this is a non-starter for almost everyone building commercial products. You might find that unreasonable, but it will be a barrier to adoption.

Mark Roddy


Agreed, working within the current MSFT-defined tool-chain is a must; anything else is of no interest to me. Logo compliance is a must. It was heavy-handed that, out of the blue, Microsoft decided to make /kernel exclusive of EH. Someone clearly had an agenda there, since everyone has known for years that EH is the future of the kernel: not if, but when. And yours is not the first such EH library. Perhaps Microsoft wants to be the only one to roll this out, which is strange since they still don’t bother defining new and delete even today. Hopefully library developers like yourself can find a workaround to this.

And why are you talking about antiquated Boost things? We are approaching 2014, so get on board with C++11 and std::bind and std::thread, and don’t even mention legacy things.

> There is really no limit to the practicality of the standard libraries in the kernel.
>> It seems to be a very optimistic statement -).

It is 100% true. My post gave a flavor of this; std::wstring, std::list and std::thread are just the beginning of the incredibly useful library features that blow away code written without them.
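
To give one concrete taste of what I mean (hypothetical, of course, since it assumes a kernel-mode port of the library exists), compare this with hand-rolling the same thing out of UNICODE_STRINGs, pool allocations, and length bookkeeping:

// Hypothetical sketch: what building a symbolic link name could look like
// if std::wstring were usable in the kernel. Today the equivalent means
// manual buffer sizing plus RtlAppendUnicodeToString-style calls.
#include <string>

std::wstring BuildSymlinkName(const std::wstring& deviceName, int index)
{
    return L"\\DosDevices\\" + deviceName + std::to_wstring(index);
}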

I made /kernel happen… so blame me if you want. It has nothing to do with excluding others from a potential market; rather, it codifies the informal rules as formal ones. EH is not the future in the kernel as far as MSFT is concerned; the danger of misuse and of corrupting system state is great. If that changes, we will update the flag to allow it as appropriate.

d

Bent from my phone



> RTTI was added more than 15 years after DECLARE_DYNAMIC was created.

Simultaneously, in the early 1990s.

No one used the standard C++ facility, simply because the layout of the RTTI object was too pathetic and there was no virtual constructor.

DECLARE_DYNAMIC was just better.

Nevertheless, RTTI is still needed for C++ to do EH: exceptions are classes inherited from one another.

Anyway, these are all kludges: full and proper RTTI support can exist only in managed code, like WinRT-extended C++ :slight_smile:


Maxim S. Shatskih
Microsoft MVP on File System And Storage
xxxxx@storagecraft.com
http://www.storagecraft.com

> Nevertheless, RTTI is still needed for C++ to do EH: exceptions are classes inherited from one another.

That does not seem to be the case. The standard says rather the opposite: the RTTI operations can raise exceptions. Any further interdependence between RTTI and EH is an internal detail of a particular compiler. For example, with MS C++ the EH works fine even if you have RTTI turned off: the compiler still generates the type-descriptor tables for the types involved in the EH process, and note that those tables are not the complete RTTI type-describing structures.
Well, several years ago gcc had a problem where turning RTTI off at compile time broke EH; that was a bug.
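
Here is a small sketch you can try in user mode (the same EH machinery is at work there), built with RTTI disabled, showing what I mean:

// Demonstrates that MSVC's EH does not depend on /GR: with RTTI disabled,
// catch-by-base-class still matches correctly, because the compiler emits
// its own EH type descriptors regardless. Build, for example, with:
//     cl /EHsc /GR- catchdemo.cpp
#include <cstdio>

struct base_error            { virtual ~base_error() {} };
struct derived_error : base_error {};

int main()
{
    try {
        throw derived_error();
    } catch (const base_error&) {   // matched via the EH tables, not RTTI
        std::puts("caught derived_error as base_error, with /GR-");
    }
    // By contrast, dynamic_cast or typeid on these types under /GR- draws a
    // compiler warning and cannot be relied on: the full RTTI data is absent.
    return 0;
}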

> Anyway these are all kludges: the full and proper RTTI support can occur in managed code only, like the WinRT-extended C++

In my opinion, C++ currently doesn’t pretend to be object-oriented in the same sense as C#, Java, or Objective-C. C++ is just an instrument for building your own OO framework, and the RTTI capability is good enough as a part of the language, but it certainly may not fulfil your framework’s requirements.

> If you want developers to adopt use of your library you really should support building within the MSFT defined tool-chain. As is this is a non-starter for almost everyone building commercial products. You might find this unreasonable, but it will be a barrier to adoption.

> Agreed, working within the current MSFT defined tool-chain is a must. Anything else is of no interest to me. Logo compliance is a must.

OK, yes. I’ll consider supplying a VC2012 solution with the project for convenience.
But the ‘/kernel’ compiler switch is a kind of mutual exclusion: it simply cannot be used to build a driver that uses EH and RTTI -).

It is hard to believe this RTL could ever be considered part of some commercial product. Let’s just look at the current state of the project:

  1. it is entirely a private investigative initiative;
  2. it uses a whole bunch of undocumented, poorly observed, or reverse-engineered technologies;
  3. it uses officially unsupported or unrecommended facilities;
  4. it was initially released with the intention of being of interest to enthusiasts of using C++ in the NT kernel environment.
    Although the license is permissive enough for the code to be used in a wide range of side projects, including commercial ones, anyone intending to bring this code into their business needs to understand exactly what they are doing, because all profits and risks are theirs alone. The RTL author is competent only to answer questions about the internal layout of the library and the supplied tools. :slight_smile:

> And why are you talking about antiquated boost things? We are approaching the year 2014 so get on board with c++11 and std::bind, std::thread and don’t even mention legacy things.

Oh, boost::bind and boost::function are by no means antiquated, IMO. Their std:: twins merely build on the experience of the corresponding Boost facilities.

> It is 100% true. My post gave a flavor of this; std::wstring, std::list and std::thread are just the beginning of the incredibly useful library features that blow away code without them.

These are all well and good, but a great deal of effort has to go in before this can become real in the kernel.

It seems that every few years somebody surfaces with a new one of these things. Didn’t we see something like this, with one particular dev posting with great fervor, a few years back?

While cute, and interesting personal learning exercises, they’re never suitable for commercial products.

If you want to use C++ for driver development (I think that’s too bad, but) you have whatever Microsoft now supports.

If you want to use a proper OO language with rich run-time support, pray for the day that we can use managed code in kernel-mode… and don’t expect a solution soon.

Peter
OSR

> If you want to use a proper OO language with rich run-time support, pray for the day that we can use managed code in kernel-mode… and don’t expect a solution soon.

That would be great… along with a whole ‘dbg_console’ instead of the current DbgPrint(), where you could type commands and scripts for kebash and kepython -)

> It seems that every few years somebody surfaces with a new one of these things. Didn’t we see something like this, with one particular dev posting with great fervor, a few years back?

And people who try to use C++ in the kernel and get badly stuck turn up rather more often than that. It seems like a good thing if those people can get the code, look at it, and come to their own conclusion, whether “oh, EH in the kernel really is a piece of junk” or “hmm, maybe it’s worth trying for something”, isn’t it?
If by ‘one particular dev’ you mean Peter Hurley, then I’ve looked at his library with great interest.

> While cute, and interesting personal learning exercises, they’re never suitable for commercial products.

Yes, it was certainly a very interesting piece of work.
And now everyone who gets hold of this code can decide about using it according to their own preferences. Is there something wrong with that? If some code exists, that is a bit better than none. -)

Wrong? Not so very wrong, no.

See… I’ve personally never believed that. That’s one reason why I’ve never been an adherent of the OSS movement. “Here’s a lot of code that, while it works very well in some aspects, is shit in other aspects. Please figure out for yourself which parts of the code are which. But it’s free and you can use it. And we had fun writing the parts that we wrote. You’re welcome.”

From my own little viewpoint, having code available to developers that is problematic and/or does not follow best practices is worse than having no code available. This type of code is just a big hole for unsuspecting devs to fall into.

It’s hard enough to write Windows drivers without having to deal with additional unknowns.

This isn’t meant as a specific criticism of your endeavour. Rather, it’s my view on the whole genre.

Peter
OSR

> If you want to use C++ for driver development (I think that’s too bad, but) you have whatever Microsoft now supports.

Stack size is one of the issues.

If somebody really ports STL/Boost to kernel mode, then good luck dealing with all those by-value objects.
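
For the sake of illustration (a kernel-mode thread stack is only on the order of 12 KB on x86 and 24 KB on x64, and far less is really yours once you are a few drivers deep in the call chain), a sketch of how quickly the by-value style eats it:

// Sketch: idioms that are harmless in user mode but dangerous on the small
// kernel stack. Every by-value parameter below copies ~1.5 KB; the local
// scratch buffer takes another 4 KB straight off the stack.
#include <array>
#include <cstddef>

struct Packet { unsigned char payload[1500]; };

static unsigned Checksum(Packet p)               // pass-by-value copy
{
    unsigned sum = 0;
    for (std::size_t i = 0; i < sizeof p.payload; ++i)
        sum += p.payload[i];
    return sum;
}

unsigned Process(const Packet& in)
{
    std::array<unsigned char, 4096> scratch = {}; // 4 KB of stack, gone
    (void)scratch;
    return Checksum(in);                          // and another ~1.5 KB copied
}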


Maxim S. Shatskih
Microsoft MVP on File System And Storage
xxxxx@storagecraft.com
http://www.storagecraft.com

> From my own little viewpoint, having code available to developers that is problematic and/or does not follow best practices is worse than having no code available. This type of code is just a big hole for unsuspecting devs to fall into.

Tru dat.

And if it’s a commonly available bad example, it takes on a life of its own
after a while.

mm


On 11/12/2013 5:43 PM, xxxxx@osr.com wrote:

> See… I’ve personally never believed that. That’s one reason why I’ve never been an adherent of the OSS movement. “Here’s a lot of code that, while it works very well in some aspects, is shit in other aspects. Please figure out for yourself which parts of the code are which. But it’s free and you can use it. And we had fun writing the parts that we wrote. You’re welcome.”

Isn’t it more like “Here’s a lot of code. If it doesn’t work we’ll do
our best to fix it if we have the time and energy, otherwise you have
the source to figure out a fix or workarounds yourself”?


Bruce Cran


As someone who works with an open-source team of developers: this isn’t really what goes on. “Works, modulo the usual bugs in any complex software product” is on one branch; “works for shit but has new features” is on another, the development branch. We get to leverage the efforts of many, and in general the release branches of the big projects (Linux, for example) are of rather high quality. The dev branches, not so much, but that is a known risk factor.

That is the good news. The real problem is the churn in the release branch. Because the dev branch is loose and has people adding cool features and so on, shit that works just fine regularly gets re-implemented for arbitrary reasons. Then that stuff moves to the release branch, and people using that branch to add value via their own products get to deal with the arbitrary churn.

More specifically people with patch queues on release branches frequently
get to toss all their nice changes and enhancements out and re-implement
them, refix re-introduced bugs, etc. all over again. And again. And again.
And fight with the reigning authorities of this product or that product to
get their conflicting features into mainline and out of patch status rather
than somebody else’s conflicting features.

Mark Roddy

Generally, I avoid free software for one important reason: I can’t afford it.

Nobody would pay me to fix a bug in gcc. And can I really ship a product to a client who can’t recompile the source without my modified compiler?

I worked for many years with an optimizing compiler. It had not achieved
“critical mass” (a software entity is said to have reached “critical mass”
when fixing one bug introduces 1+ epsilon bugs), but there was a high
probability that fixing one bug would either introduce or expose one other
bug; it was in delicate balance.

As a consequence, we could never recompile our operating system from source overnight; formerly-working components of the OS would likely fail because some new bug would be found. It was a regular occurrence that, while we were working on component X, a point release/hotfix equivalent of the compiler would introduce a bug in formerly-working code. Because we worked closely with the compiler writers, and many of us (like me) were already compiler geeks, we would get the details of what the “root cause” was; for example, certain kinds of potential aliasing would not be detected as invalidating a common subexpression, so a “stale” result might be reused. Tweaking the guts of an optimizing compiler is not for the faint of heart.

So if you find that one of the parts of your open-source component is
broken, yes, feel free to fix it yourself. And if the basement of this
new house you are considering is too small, feel free to just dig a
subbasement. Don’t worry if you don’t understand basic structural
principles, hey, the worst that can happen is that your house can
collapse.

This is not to say that all of Windows is beautiful code, exemplary in every way of Best Practice. But I’ve seen horrible code, and when I criticized it, I was told “But I learned how to do that from <open-source system here>, and that’s the clever [editorial comment here: add incomprehensible and unmaintainable] way it’s done there”. Sort of the same crappy quality as many MSDN examples, whose quality has doomed several real projects; I saw one $500,000 investment fail because the author used the MSDN multi-threaded async socket example code (other than the fact that the example got networking wrong, threading wrong, and synchronization wrong, there was nothing wrong with it).

Even the WDM examples were often poor examples, since there were huge
numbers of “clever” tricks used that resulted in unmaintainable code
(IoSkipCurrentIrpStackLocation hacks, for example, that improved
performance on a 386 but had no noticeable effect on Pentium 4+ machines)
or left readers confused as to why it was really done. And the horrors
of WDM power management are the best argument for KMDF I have ever seen.

Visible source does not work as well as the free-open-source people like
to believe.

And the other issue is the GNU license, which is one of the biggest barriers to code-sharing that ever existed, and which I think has done far more harm than good. My clients didn’t want anything with a GNU license anywhere near their product code; if they could have kept every trace of such code off my machine (fearing cross-contamination), they would have done so. Nobody ever lost by underestimating the paranoia of corporate legal staffs, or their ability to overreact to the phrase “open source” (never mind that BSD, Boost, CodeProject, and Creative Commons have sane licenses: all open source is Evil).

Sorry to head toward a flame war, Peter, but you did open the door… (move reactions to NTTALK, which I don’t get, and therefore I will be relieved of any compulsion to reply)
joe


Well… I guess it depends. Isn’t it really “Here’s a lot of code. If it doesn’t work we’ll FIX IT IF WE HAPPEN TO FIND THAT PART OF THE PROJECT AMUSING AND OF USE TO US; otherwise you have the source to figure out a fix or workarounds yourself”?

And thus, in many cases, you’d be better off starting from the beginning yourself.

Give me a closed source process, where engineering discipline is enforced during development, ANY day.

There are many different flavors of OSS, and they each have their own particular attributes:

  • The “big project” world that Mr. Roddy addressed.

  • The “personal-sized” OSS projects, like that of the OP who effectively said “better some software than no software.” This is the flavor I was specifically thinking of and addressing.

  • The “consortium-based” OSS project, where an industry group gets together to build a reference model of something.

Each flavor is its own special world. I’m sure there are other flavors.

Again, I was speaking really about the “personal-project sized” OSS flavor, like the OP’s. Stuff like Linux, well… I can hardly comment on that with any experience; for that I’d have to wait for Anton to tell me what to say. For the “consortium-based” OSS projects… I really don’t have anything positive at all to say, except that it is an excellent activity for somebody who wants to exercise their political strategy skills.

Peter
OSR

On 11/12/2013 8:24 PM, xxxxx@osr.com wrote:


> Give me a closed source process, where engineering discipline is enforced during development, ANY day.

To see the sort of discipline involved in Linux-type projects, take a
look at http://www.freebsd.org/releases/10.0R/schedule.html . In a
previous job at a large tech company I was left running a continuous
integration server under my desk while people sent check-in notices
manually via email and argued they didn’t have time to write unit
tests. In contrast FreeBSD runs a tinderbox continuously, and people
care enough about the project as a whole that they make efforts to do
things properly.

On the other hand, I took over a Windows driver a few years ago, made some bug fixes and a few releases, and haven’t had time to keep it up. I know it has some major bugs, and I have people offering me money to fix it, but I simply don’t have time to work on it. For the people who use it, it’s little better than a closed-source product from a company that’s gone out of business, since few people have the knowledge to take it over and fix it.


Bruce Cran