“Mike Kemp” wrote in message news:xxxxx@ntdev…
> (a) concur with Bill McKenzie. The driver examples need to be complete and
> correct. Can I suggest that MS contract some of the experts here on this
> site to review every single example to make sure they all comply with
> current best practice, and that all the major technologies are present. (I
> had to learn the hard way that the 1394 examples in WDM were actually
> examples of how not to write a driver!)
While I would love to see good samples, and I keep lobbying for samples that
just cleanly pass the various tools such as /Wall and PreFAST (with no
filtering), good samples are hard work.
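(As a concrete yardstick - and this is only a hypothetical sketch of mine, not any shipping sample - a “clean” sample is roughly this shape: a bare KMDF skeleton written in the PREfast-friendly style, with role-type declarations, no unused parameters, no deprecated calls, and as little as possible for /W4, /Wall or PREfast to complain about.)

    /*
     * Hypothetical minimal KMDF skeleton - names and details are mine,
     * not from any shipping sample. The point is only what a "clean"
     * sample looks like: nothing unused, nothing deprecated.
     */
    #include <ntddk.h>
    #include <wdf.h>

    DRIVER_INITIALIZE DriverEntry;
    EVT_WDF_DRIVER_DEVICE_ADD SampleEvtDeviceAdd;

    _Use_decl_annotations_
    NTSTATUS
    SampleEvtDeviceAdd(WDFDRIVER Driver, PWDFDEVICE_INIT DeviceInit)
    {
        WDFDEVICE device;

        UNREFERENCED_PARAMETER(Driver);

        /* No device context, no queues - just create the device object. */
        return WdfDeviceCreate(&DeviceInit, WDF_NO_OBJECT_ATTRIBUTES, &device);
    }

    _Use_decl_annotations_
    NTSTATUS
    DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
    {
        WDF_DRIVER_CONFIG config;

        WDF_DRIVER_CONFIG_INIT(&config, SampleEvtDeviceAdd);

        return WdfDriverCreate(DriverObject, RegistryPath,
                               WDF_NO_OBJECT_ATTRIBUTES, &config,
                               WDF_NO_HANDLE);
    }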
First, unless it is a driver Microsoft owns, they are either going to have
to negotiate to release it, or write one for a given piece of hardware.
Second, even a widely used driver may not be up to “sample standard” - i.e.
it uses obsolete calls, skimps on recommended practice, or takes shortcuts
for its particular hardware that you do not want in a sample. So to upgrade
the samples you are asking for a lot of money from Microsoft. The question
is whether this is the best use of it, or whether improving other parts of
the environment would be.
This does not even consider how hard it is for some classes of device to get
a good, well documented, low cost sample device. Consider that until OSR
created its USB card, you were looking at big bucks for a sample USB device.
I had hoped the Device Simulation Framework would help here, but that is
such an abortion that it and its developers should be scrapped.
Finally, don’t necessarily blame the developers here. If you have not dealt
with Microsoft legal, you do not know what paranoid is. Remember, these are
the guys who have blocked the source code for KMDF; now you want them to
support Microsoft “giving away” code that can make it easy to clone a piece
of hardware or make a commercial driver for a competing device?
I’ve been lobbying Microsoft:
http://msmvps.com/blogs/windrvr/archive/2008/01/18/fixing-winhec.aspx
http://msmvps.com/blogs/windrvr/archive/2008/02/16/fixing-winhec-part-2.aspx
They need a good conference where the developer community can identify
its needs. Good samples are one of them; there are a lot more.
–
Don Burn (MVP, Windows DDK)
Windows 2k/XP/2k3 Filesystem and Driver Consulting
Website: http://www.windrvr.com
Blog: http://msmvps.com/blogs/WinDrvr
Remove StopSpam to reply
If they can’t provide good samples, which I find terribly hard to believe
having worked on driver framework products with samples for years, then at
least document some of the stuff that people ask about over and over.
And btw, whoever it was that said no one should be messing around in the
kernel…I could not disagree more!!
The whole point I am trying to make here is that messing around in the
kernel should be a LOT more straightforward. I think Microsoft too has the
mindset that they are the only ones who should be messing around in the
kernel and that just isn’t reality. Windows is a vehicle. It isn’t the end
game, it is the enabler for people to bring great technologies to bear on
the PC. Thus it has to be extensible. There is no way you can move most
drivers to user-mode today. It just won’t work for most technologies. The
performance requirements generally won’t allow it.
So, just make kernel development less painful. I mean, I have been working
in the Windows kernel for over 10 years and I still run into tasks that take
inordinate amounts of time and “careful observation” to get done. And if
you don’t have a Dell or HP backing you up, good luck getting **ANY** help
from Microsoft. Even if you do have a Dell or HP behind you…good luck.
The big problem, as I see it at least, is exactly this situation. Microsoft
doesn’t see what we do as being terribly important to their bottom line. I
think this is highly short-sighted. The costs/benefits are very difficult
to pin down on kernel development, but they always are rooted there in my
experience. Stability, security, ease of use, it all starts and sometimes
ends in the kernel.
If it were me, which it never will be, but if it were, I would be highly
concerned with making sure that the Windows customer base could
extend/use/interface the OS extremely easily and safely. That requires a
LOT more resources than Microsoft is throwing at the problem today. Or
maybe a shift of resources.
Anyway, 3rd party kernel devs aren’t going away, no matter how much anyone
desires to wish them away.
Bill M.
Bill McKenzie wrote:
> …
> And btw, whoever it was that said no one should be messing around in the
> kernel…I could not disagree more!!
>
> The whole point I am trying to make here is that messing around in the
> kernel should be a LOT more straightforward. I think Microsoft too has the
> mindset that they are the only ones who should be messing around in the
> kernel and that just isn’t reality. Windows is a vehicle. It isn’t the end
> game, it is the enabler for people to bring great technologies to bear on
> the PC. Thus it has to be extensible. There is no way you can move most
> drivers to user-mode today. It just won’t work for most technologies. The
> performance requirements generally won’t allow it.
If only things were this simple.
The people on this mailing list have a very peculiar slant into the
Windows world. We work in the dirty underbelly, where the rubber meets
the road. We want it to be easy to do the things we need to do.
But we are NOWHERE near typical, and it’s awfully easy to lose sight of
this. Accurate to at least three decimal places, 100% of the Windows
users in the world are corporate users who just need a tool to run
Outlook and Office. Kernel coding is irrelevant, because their
corporate IT departments won’t let them plug in our cool products
anyway. They just want something they can turn on in the morning and
use to produce another day’s worth of paperwork.
That’s where Microsoft’s money comes from, and they are perfectly
correct in concentrating the vast majority of their efforts on keeping
those people happy.
We are irrelevant. Opinionated and noisy, yes, but irrelevant nonetheless.
> So, just make kernel development less painful. I mean, I have been working
> in the Windows kernel for over 10 years and I still run into tasks that take
> inordinate amounts of time and “careful observation” to get done.
Be careful what you wish for; this paragraph would make a good bullet
point argument for DDK.NET.
–
Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.
> Be careful what you wish for; this paragraph would make a good bullet
> point argument for DDK.NET.
So very true.
mm
Tim Roberts wrote:
> Be careful what you wish for; this paragraph would make a good bullet
> point argument for DDK.NET.
I am afraid this is exactly which way the wind blows - after all, Microsoft Research wrote a whole OS(!!!) in C#…
Anton Bassov
> That’s where Microsoft’s money comes from, and they are perfectly correct
> in concentrating the vast majority of their efforts on keeping those
> people happy.
I don’t know of a single major corporation in existence today that is not
using 3rd party software on virtually every one of its PCs AND THUS running
a custom driver of one flavor or another – generally the vast undocumented
world of file system filter drivers for anti-virus. Every company I know of
is using Symantec, McAfee or some other virus scanner, or an encryption
product, or a combination of these. So, while it may seem that revenue and
drivers are not connected…they very much are. With the rate at which virus
scanning software is failing and causing stability issues in Windows, I
cannot believe they aren’t making this connection. But I guess blatant and
obvious facts don’t usually get in anyone’s way in the high-tech world.
Additionally, the lack of available information on the OS interfaces causes
companies to incur costs, which never get measured by the way, via lost
revenue (due to inability to ship, or to ship on time) and needless R&D. And
for what reason? Documentation is too hard? Don’t want to release
specifics that already leaked out with Win2K SP1? I stay employed so it’s
all good, but I just don’t get the rationale. But hey, I took logic in
college.
> Be careful what you wish for; this paragraph would make a good bullet
> point argument for DDK.NET.
Very good point. My statements are directed at trying to get Microsoft to
loosen the belt on information related to architecture and primarily
INTERFACES!! Not to get Microsoft to try to solve my problems, which they
generally seem to have no clue about, for me.
Bill M.
Bill McKenzie wrote:
> And btw, whoever it was that said no one should be messing around in
> the kernel…I could not disagree more!! The whole point I am trying
> to make here is that messing around in the kernel should be a LOT
> more straightforward.
To add my 2 cents:
Don’t forget that no hardware manufacturer really wants to write a
driver for a new device. It’s just not their core business!
So how shall we get properly working drivers if not with excellent
support, a virtually free WDK, and good samples?
(Same in fact with software drivers for e.g. virus scanners - the driver
is necessary to achieve the goal, not a goal per se.)
IMO in many cases it should not be necessary to “be messing around in
the kernel”, because most things should just be system services, and
have user-mode interfaces.
But if unfortunately it is necessary, then - yes - it should be made
as easy and simple as possible to do it.
> There is no way you can move most drivers to user-mode today. It
> just won’t work for most technologies. The performance requirements
> generally won’t allow it.
Correct. Example: I have a working UMDF driver here (because it doesn’t
need any code signing) for Vista64/32 and XP.
But I am deploying a signed(!) WDM driver instead, because (a) the UMDF
driver is 10-20% slower, and (b) UMDF does not run on all Windows
platforms we have to support.
Some “black-boxes” at our customer sites use pretty old PCs in a tested
and approved configuration. They are not ours, and we can’t upgrade them
and their OS version just because UMDF would be so much nicer.
> So, just make kernel development less painful. I mean, I have been
> working in the Windows kernel for over 10 years and I still run into
> tasks that take inordinate amounts of time and “careful observation”
> to get done. And if you don’t have a Dell or HP backing you up, good
> luck getting **ANY** help from Microsoft. Even if you do have a
> Dell or HP behind you…good luck.
The uneasy feeling I have with this “commercial OS software” is that in
case something really breaks, I could be completely lost.
As opposed to open source software, where in the worst case you can
download the source packages and see for yourself how it’s supposed to
work. Economically perhaps not viable, but at least possible.
> The big problem, as I see it at least, is exactly this situation.
> Microsoft doesn’t see what we do as being terribly important to their
> bottom line.
No, actually I think MS does. Perhaps not everyone at Microsoft, but
the mere existence of WHDC, Connect, the WDK, MSDN blogs, Channel9, etc.
shows that Microsoft realizes very well that it needs driver developers
writing drivers that support new hardware on MS operating systems.
Without working drivers, there are no usable devices.
Without device support, nobody buys an OS.
There are enough OS carcasses lying around on the 'net to prove that lesson.
On the other hand, documentation and communication are extremely
time-consuming tasks without any direct, measurable benefit to a
company - specifically from/for “system software” developers.
Combine this with the contract-CEO-driven “drive-by-shareholder-value”
management we have seen in recent years, and MS is actually doing pretty well.
> I think this is highly short-sighted. The costs/benefits are very
> difficult to pin down on kernel development, but they always are
> rooted there in my experience. Stability, security, ease of use, it
> all starts and sometimes ends in the kernel.
…as somebody obviously learnt the hard way and started the security
initiative at Microsoft.
It must have cost them millions. (And of course it was late.)
> If it were me, which it never will be, but if it were, I would be
> highly concerned with making sure that the Windows customer base could
> extend/use/interface the OS extremely easily and safely.
> That requires a LOT more resources than Microsoft is throwing at the
> problem today. Or maybe a shift of resources.
Or a shift of kernel paradigm. And it may be that this is happening, now
that multi-core CPUs are common, and virtualization is not something
only IBM does on MVS/VM.
> Anyway, 3rd party kernel devs aren’t going away, no matter how much
> anyone desires to wish them away.
3rd party device driver developers will not go away, because no company
is big enough to develop all of the required device drivers for the new
hardware themselves. Whether this is “kernel” mode or something else is
in the end not relevant, as long as all necessary device control tasks
can be done reasonably fast.
SCNR. -H
>>What about DMA??? <<
I’ve been worrying about this. It seems to me that if you have a secure CPU
that the OS controls, you can have a safe OS, as uncrashable as the OS
vendor wants it to be. If you then permit the addition of a hardware
component that is able to walk all over the system memory then you’ve broken
that model.
So it is not really so hard: if we want to move to a safe OS that you and I
can rely on not to crash because some overworked driver writer fell a little
short of perfection, we have to add hardware that prevents any user hardware
from accessing OS memory without the OS granting permission.
Then DMA can be done in user mode and have free access to “user space” in
hardware, as well as user memory etc in software.
Legacy support is not really a problem. I could see a situation where we
move towards this hardware model. All it takes is a fairly small hardware
spec change and a matching OS switch. If you buy compliant hardware you can
run the OS in safe mode unless you need to install legacy addons (with their
dangerous kernel drivers). But if you only buy the new-spec hardware and
software, with all drivers written in user mode, you can leave the OS in
safe mode and stay safe. I suspect the market for the dangerous legacy
hardware will fall away quickly. The new stuff should even be cheaper, as
time to market for a user-mode driver has got to be shorter.
User-mode performance should not be an issue. A memory write is as fast in
user mode as in kernel mode. And with blindingly fast CPUs, how can it be an
issue? I read here of the horrendous things people’s kernel drivers do to
eat up resources; maybe that’s why my word processor is no faster after 25
years.
M
----- Original Message -----
From: xxxxx@hotmail.com
To: Windows System Software Devs Interest List
Sent: Monday, February 18, 2008 10:42 AM
Subject: RE:[ntdev] Do we need a “before you post” document?
> it should be possible to do everything in user mode.
What about DMA??? If a third-party driver sets up a DMA transfer improperly,
it may well overwrite the kernel itself. Objectively, the OS cannot validate
DMA transfers that are done by third-party drivers, because if it could,
then it could handle the target controller itself the way it does with USB
and IDE controllers, i.e. there would be no need for any third-party
assistance in the first place. Therefore, as long as the OS does not know
everything about the controllers on the target machine, it will still
require some trusted third-party components. The best we can do here is to
move *most* third-party drivers to UM…
Anton Bassov
Mike Kemp wrote:
>> What about DMA??? <<
> [If] we want to move to a safe OS […] we have to add hardware to
> prevent any user hardware accessing OS memory without OS granting
> permission.
A memory mapper that is solely controlled by the OS (hypervisor),
between the externally accessible memory bus - the one any DMA device
has access to - and main memory would probably do the trick.
Should work also for legacy devices.
> Then DMA can be done in user mode and have free access to “user
> space” in hardware, as well as user memory etc in software.
You could even use old kernel mode drivers and let them run in user
space (or in a privilege level below the OS). They can do their (not
quite so) “direct I/O” and could still not overwrite vital OS tables.
> A memory mapper that is solely controlled by the OS (hypervisor),
> between the externally accessible memory bus - the one any DMA device has
> access to - and main memory would probably do the trick. Should work also
> for legacy devices.
Another option is to do everything in software. Controllers have no clue
about virtual addresses - the only thing they understand is physical memory.
What we can do here is design a generic DMA descriptor that drivers fill
with *virtual* addresses and pass to the OS. The system will translate them
into physical ones and build a corresponding descriptor that describes
physical addresses, so that the list the controller deals with is provided
by the OS, rather than by the driver. If we do it this way, the OS will be
able to validate all DMA transfers and ensure that they do not involve
addresses that are, from the driver’s perspective, reserved…
Anton Bassov
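A toy, user-mode sketch of the scheme just described (every name here is invented for illustration; nothing below is a real Windows interface): the driver hands the “OS” a descriptor in terms of its own virtual addresses, and the “OS” validates each page against what that driver owns before emitting the physical list the controller would actually see.

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    #define PAGE_SIZE 4096u

    typedef struct {
        uintptr_t va;      /* driver-visible virtual address */
        size_t    length;  /* transfer length in bytes */
    } DmaRequest;

    typedef struct {
        uint64_t pa;       /* physical address, filled in by the "OS" only */
        size_t   length;
    } DmaPhysElement;

    /* The only pages this driver owns in the toy system:
     * VA 0x10000..0x12FFF, mapped to three scattered physical pages. */
    #define OWNED_VA_BASE 0x10000u
    static const uint64_t owned_pa[3] =
        { 0x80000000ull, 0x80020000ull, 0x80040000ull };

    /* "OS" side: validate and translate. Returns the number of physical
     * elements, or -1 if the driver asked for memory it does not own. */
    static int os_build_physical_list(const DmaRequest *req,
                                      DmaPhysElement *out, size_t max_out)
    {
        size_t done = 0, n = 0;
        while (done < req->length) {
            uintptr_t va  = req->va + done;
            size_t    off = va % PAGE_SIZE;
            size_t    len = PAGE_SIZE - off;
            if (va < OWNED_VA_BASE || n >= max_out)
                return -1;                 /* reject: not this driver's memory */
            size_t page = (va - OWNED_VA_BASE) / PAGE_SIZE;
            if (page >= 3)
                return -1;                 /* reject: beyond the owned range */
            if (len > req->length - done)
                len = req->length - done;
            out[n].pa     = owned_pa[page] + off;
            out[n].length = len;
            n++;
            done += len;
        }
        return (int)n;
    }

    int main(void)
    {
        DmaRequest req = { 0x10800, 6000 };   /* crosses a page boundary */
        DmaPhysElement list[8];
        int n = os_build_physical_list(&req, list, 8);

        for (int i = 0; i < n; i++)
            printf("element %d: PA 0x%llx, %zu bytes\n",
                   i, (unsigned long long)list[i].pa, list[i].length);
        return 0;
    }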
xxxxx@hotmail.com wrote:
> A memory mapper that is solely controlled by the OS (hypervisor),
> between the externally accessible memory bus - the one any DMA
> device has access to - and main memory would probably do the trick.
> Should work also for legacy devices.
xxxxx@hotmail.com wrote:
> Another option is to do everything in software. Controllers have no
> clue about virtual addresses - the only thing they understand is
> physical memory. What we can do here is design a generic DMA
> descriptor that drivers fill with *virtual* addresses and pass to
> the OS. The system will translate them into physical ones and build
> a corresponding descriptor that describes physical addresses, so that
> the list the controller deals with is provided by the OS, rather
> than by the driver.
If you want a really stable OS you cannot rely on the 3rd party driver
playing “nice” and correctly forwarding your physical addresses to the
device. You have to enforce it.
> If we do it this way, the OS will be able to
> validate all DMA transfers and ensure that they do not involve
> addresses that are, from the driver’s perspective, reserved…
There could still be a programming error - or a DMA hardware fault -
that leads to the wrong physical addresses being used.
No, the “enforcement” needs to be done in hardware, I’m afraid.
That’s the reason why I suggest a hardware memory-mapping device (i.e. one
that takes the upper address lines from e.g. PCI and maps them, as directed
by the OS, to the upper physical memory address lines), so that a DMA
chip cannot work on unmapped memory directly or access areas that
are not permitted.
The mapping does not need to be the same as the “virtual” one of the CPU
(but much simpler), and it is in a very direct sense “physical”.
This way the user can crap up the DMA driver code or the DMA hardware
can fail, but your OS structures are still preserved.
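A toy model of that enforcement (names invented; nothing here corresponds to real hardware or a real API): the “OS” programs a remap table that sits between the I/O bus and memory, every address a bus master emits is looked up there, and anything the OS never mapped for that device simply faults instead of reaching RAM.

    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    #define IO_PAGE_SIZE     4096u
    #define IO_TABLE_ENTRIES 16u

    /* One entry of the (imaginary) remap table; only the OS writes it. */
    typedef struct {
        bool     valid;
        uint64_t phys_base;   /* where this I/O page really lands in RAM */
    } IoRemapEntry;

    static IoRemapEntry io_table[IO_TABLE_ENTRIES];   /* "hardware" registers */

    /* What the "hardware" does on every bus-master access: translate or
     * fault. The driver and the device never see raw physical memory. */
    static bool io_translate(uint64_t bus_addr, uint64_t *phys_out)
    {
        uint64_t page = bus_addr / IO_PAGE_SIZE;
        if (page >= IO_TABLE_ENTRIES || !io_table[page].valid)
            return false;                     /* abort the DMA access */
        *phys_out = io_table[page].phys_base + bus_addr % IO_PAGE_SIZE;
        return true;
    }

    int main(void)
    {
        uint64_t pa;

        /* The OS grants the device exactly one page. */
        io_table[2].valid     = true;
        io_table[2].phys_base = 0x80040000ull;

        if (io_translate(2 * IO_PAGE_SIZE + 0x10, &pa))        /* allowed  */
            printf("DMA hits PA 0x%llx\n", (unsigned long long)pa);
        if (!io_translate(7 * IO_PAGE_SIZE, &pa))              /* rejected */
            printf("access to an unmapped I/O page is refused\n");
        return 0;
    }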
VT-d/IOMMU. How fully implementable those are without breaking existing
software/hardware/firmware remains to be seen (to me, at least). SMM
mode also poses a sort of similar, albeit much less common, problem.
mm
>With the rate at which virus
> scanning software is failing and causing stability issues in Windows,
> I cannot believe they aren’t making this connection.
I agree with you that documenting these things would be a good idea, but
they have made the connection - AV’s don’t exist on Vista/Longhorn.
mm
> If you want a really stable OS you cannot rely on the 3rd party driver
> playing “nice” and correctly forwarding your physical addresses to the
> device. You have to enforce it.
Exactly. This is the reason why I spoke about DMA validation…
> There could still be a programming error - or a DMA hardware fault
> - that leads to the wrong physical addresses being used.
With certain tricks you can make the OS ensure that a driver uses only
those parts of physical memory that it is allowed to access. Therefore, the
most that a buggy driver can do is screw up its *own* memory, and not the
memory that belongs to other drivers, let alone to the OS.
> No, the “enforcement” needs to be done in hardware, I’m afraid.
Well, of course hardware will do it better. However, please note that we are
speaking about a loosely-coupled GPOS that is concerned with portability.
Therefore, it would be naive to expect something like that from the OS
designers. If you don’t believe me, look at segmentation, the four privilege
levels and the ability to perform task switches in hardware - although x86
offers these features, all major OSes totally ignore them, apparently for
reasons of portability…
Anton Bassov
“Martin O’Brien” wrote in message
news:xxxxx@ntdev…
> >With the rate at which virus
> > scanning software is failing and causing stability issues in Windows, I
> > cannot believe they aren’t making this connection.
>
> I agree with you that documenting these things would be a good idea, but
> they have made the connection - AV’s don’t exist on Vista/Longhorn.
>
> mm
Actually a number do exist; the crappy ones, McAfee and Symantec, had
problems because of their tradition of hooking. Now, what I would argue for
security-type stuff is not so much samples, but a review of how they expect
people to do some of these things. The document they put out about
techniques in this area was a joke. I suspect if they really sat down with
the developers they would realize they needed new APIs, but then of course
no one would approve them to be retrofitted to previous OSes, so the APIs
would still be worthless.
–
Don Burn (MVP, Windows DDK)
Windows 2k/XP/2k3 Filesystem and Driver Consulting
Website: http://www.windrvr.com
Blog: http://msmvps.com/blogs/WinDrvr
Remove StopSpam to reply
I’m seeing some surprisingly naive thinking in this thread.
(1) More documentation? You’re whining about a symptom and not a cause. The root cause of the problem isn’t lack of documentation about the OS internals, the root cause of the problem is forcing every single person who wants to write a driver to know ALMOST EVERYTHING about how the OS internals work. Heck, they could publish everything in the \ke, \mm, and \io directories and I suspect you wouldn’t decrease the incidence of BSODs on customer systems in any measurable amount.
Don’t you SEE? The vast majority of driver writers don’t want to – and don’t have the time to – become driver development experts. It’s not their primary job. Most just happen to work for some company that needs a driver so that company can sell their hardware. For these people, you could provide the ultimate, total documentation, and it’d be so damn dense and complicated that they STILL couldn’t write reasonable drivers.
Mr. McKenzie makes this point himself: He’s been writing drivers for ten years and he still makes mistakes. GUESS WHAT: I’ve been writing Windows NT drivers for more than 15 years, it’s ALL I do, I try to do it as well as I can, I spend a looot of time on it, and I still make mistakes and I still learn something (almost) every week on this list.
(2) The samples? This is particularly difficult. While, in general, I agree that the samples are not as good as they should be, you’ve got to recognize that there are two categories of samples:
(a) Samples that are written to demonstrate how to use a technology or interface. These are written by the device teams (or similar) with the idea of being samples. These have historically been terrible. And, even given that they’ve been re-worked in many cases, I think they are still mostly deficient. This is because the people writing the samples don’t “get” how to write a sample that teaches folks something, and they don’t understand what people reading the sample need.
(b) Samples that are real drivers and are part of the OS. These drivers, like the disk class driver and the file systems, are written by real MS kernel devs and ship with the OS. In many cases, these samples do not demonstrate best practices. When this is the case, it is usually because (1) the sample was written before the best practices were developed, or the best practices have changed throughout the life of the sample, or (2) the real driver doesn’t need to be aware of, or work “down level” on, older versions of Windows.
Now, let’s be realistic: There’s no way a dev who owns a working driver in the Windows distribution is going to grab that driver and update it just because the idea of what constitutes “best practice” has changed. Heck, that driver is in use in MILLIONS of Windows systems… you don’t just change stuff like that… it doesn’t make sense. It ain’t broke.
In terms of this last point, I *will* remind the community to be careful what you wish for. If you ask MSFT to include in the WDK only those samples that are PFD clean and don’t use deprecated interfaces, you’re asking for many of the “real” drivers in the kit to be removed. Is that what you want?
Peter
OSR
> There’s no way a dev who owns a working driver in the Windows distribution
> is going to grab that driver and update it just because the idea of what
> constitutes “best practice” has changed.
…which raises quite an interesting question…
The OS has not changed, but our *perception* of what the “best practice” is
has. Probably our perception is just not so well founded, so that we all
repeat like parrots something that we have heard from someone else and
discourage “improper” techniques that, in actuality, work perfectly well on
millions of systems??? Let’s face it - if a newly-“discovered” technique
offered a significant advantage over the “legacy” one, surely the driver
owner would update his driver…
Anton Bassov
> Actually a number do exist; the crappy ones, McAfee and Symantec, had
> problems because of their tradition of hooking. Now, what I would argue
> for security-type stuff is not so much samples, but a review of how they
> expect people to do some of these things. The document they put out about
> techniques in this area was a joke. I suspect if they really sat down
> with the developers they would realize they needed new APIs, but then of
> course no one would approve them to be retrofitted to previous OSes, so the
> APIs would still be worthless.
Exactly right! Hmmm…maybe the realization on Microsoft’s part that I am
talking about reaches down to a change in habits and methodologies in how
business gets (or doesn’t get) done??
Documenting architectural goals and interfaces would go a LONG way in
solving the problem. And it’s cheap! I don’t want/need another miniport
model, framework or what have you. That usually just compounds the problem,
especially with the lack of source that seems to be the modus operandi.
Bill M.
This is true… SOMEtimes.
The problems most often cited are those with PnP/Power in some of the older samples that are real drivers. In truth, these drivers were written *before* PnP/Power code was frozen and “best practices” were defined (do you remember Win2K? Ugh). Hence, they’re outdated in much the same way that the power management guidelines in Oney are outdated. They work, but they pre-date the final OS code and the refinements that have followed on from that code.
Peter
OSR
> The problems most often cited are those with PnP/Power in some of the older samples
> that are real drivers. In truth, these drivers were written *before* PnP/Power code was
> frozen and “best practices” were defined
Well, then they are not really the drivers that come with the OS any more, are they?? At this point I would agree with Mr. McKenzie - indeed, it just does not make sense to provide sample drivers that were written for W2K (or, probably, even earlier) in the Vista WDK, let alone the W2K8 one. In other words, I think it should be either *current* samples or none at all. Otherwise, it just adds to the confusion…
Anton Bassov