I would say that getting rid of the paging file and, as a consequence, the
distinction between the different pools, together with the new possibilities
(as all memory would now be resident) of loosening IRQL restrictions and
rewriting code to take advantage of that, is a major challenge, and it is an
even bigger challenge to get it to work with all existing third-party kernel
code. It’s not something I would call low hanging fruit.
I realize that, especially because of the compatibility issues, the impact of
this change would not be as great as it could possibly be. Example: a DPC
routine schedules a work item because it needs to run at PASSIVE_LEVEL to
touch paged memory. This could be rewritten to do all of its work inside the
DPC, since all memory would now be resident, but if the code lives in a
third-party driver it will be hard to get it to take advantage unless that
driver is rewritten.
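For readers unfamiliar with the pattern being described, here is a minimal
WDM-style sketch of the DPC-to-work-item handoff (all names are illustrative,
and the work item is assumed to have been allocated with IoAllocateWorkItem
during device setup):

#include <ntddk.h>

/* Hypothetical device extension; names are not from this thread. */
typedef struct _EXAMPLE_EXTENSION {
    PIO_WORKITEM WorkItem;  /* allocated earlier with IoAllocateWorkItem */
} EXAMPLE_EXTENSION, *PEXAMPLE_EXTENSION;

/* Runs at PASSIVE_LEVEL, where touching paged memory is legal. */
IO_WORKITEM_ROUTINE ExampleWorkItemRoutine;
VOID ExampleWorkItemRoutine(PDEVICE_OBJECT DeviceObject, PVOID Context)
{
    UNREFERENCED_PARAMETER(DeviceObject);
    UNREFERENCED_PARAMETER(Context);
    PAGED_CODE();
    /* ... work that touches paged pool / pageable code goes here ... */
}

/* Runs at DISPATCH_LEVEL, where a page fault would bugcheck the machine,
   so the paged-memory work is handed off to a work item. */
KDEFERRED_ROUTINE ExampleDpcRoutine;
VOID ExampleDpcRoutine(PKDPC Dpc, PVOID DeferredContext,
                       PVOID SystemArgument1, PVOID SystemArgument2)
{
    PEXAMPLE_EXTENSION ext = (PEXAMPLE_EXTENSION)DeferredContext;

    UNREFERENCED_PARAMETER(Dpc);
    UNREFERENCED_PARAMETER(SystemArgument1);
    UNREFERENCED_PARAMETER(SystemArgument2);

    IoQueueWorkItem(ext->WorkItem, ExampleWorkItemRoutine,
                    DelayedWorkQueue, NULL);
}

On an all-resident system the body of ExampleWorkItemRoutine could in
principle move straight into the DPC; the handoff exists only because paged
memory must not be touched at DISPATCH_LEVEL.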
//Daniel
“Pavel Lebedinsky” wrote in message news:xxxxx@ntdev…
> However, if you look into what it takes to make Windows
> run well on this kind of hardware, you’ll find that there is
> virtually no low hanging fruit. A typical Windows system
> has hundreds if not thousands of individual components,
> and it’s highly unlikely that any given change/bugfix would
> give you more than 1% improvement in overall memory
> usage, disk IO or CPU cycles. But hundreds of such
> changes combined can make a lot of difference.
>
> This by the way is another reason (besides VM density)
> why we still care about paged code. A few pages per driver
> doesn’t sound like much, but if you want to improve the
> memory footprint of Windows, or at least make sure it
> doesn’t regress too much from the previous release, you
> have to worry about things like that (plus dozens of other
> similarly small things).
>
> –
> Pavel Lebedinsky/Windows Kernel Test
> This posting is provided “AS IS” with no warranties, and confers no
> rights.
>
>
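(For context on the paged-code mechanism mentioned above: a driver typically
places a routine in a pageable section with #pragma alloc_text, so the OS can
evict those pages while the routine is not running. A minimal sketch, with an
illustrative routine name:

#include <ntddk.h>

VOID ExamplePassiveHelper(VOID);

/* Put the routine's code in the pageable PAGE section. */
#ifdef ALLOC_PRAGMA
#pragma alloc_text(PAGE, ExamplePassiveHelper)
#endif

VOID ExamplePassiveHelper(VOID)
{
    /* Asserts IRQL <= APC_LEVEL; running pageable code at
       DISPATCH_LEVEL or above risks a page-fault bugcheck. */
    PAGED_CODE();
    /* ... */
}

Those are the "few pages per driver" that add up across hundreds of
components.)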
xxxxx@broadcom.com wrote:
In Citrix/TS world, the sessions were not virtual machines.
Literally, no, of course not, but the architecture is metaphorically
similar. The separation of session space is less strict than VM memory
separation. In some ways, a TS session represents a software simulation
of a VM. The fact is that many of the same concepts exist, and the
restrictions, tradeoffs, and lessons learned still apply.
They were sharing the kernel and (I guess) read-only file-mapping objects for system DLLs and EXEs.
Nope. One of the tricky parts of working in the Citrix world is that
you had multiple instances of many drivers. That’s why memory is such a
critical resource in a Terminal Services system. The number of sessions
you could support was directly linked to the amount of memory in the
server, so much so that there are relatively reliable formulas for IT
managers to use.
The typical installed memory I suppose was an order of magnitude less than in modern times.
Perhaps; it was not uncommon to have 1GB and 2GB Terminal Servers in the
mid-to-late 1990s. However, I don’t see how that is relevant to the
discussion. The absolute numbers change, but the concepts are all relative.
–
Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.
> Paging strategy of Windows cache and VM manager has been a favorite bitch of mine for quite a
> long time. Imagine that you’re writing a DVD with a few 700 MB files, and open IE.
Things are even worse:
- create several ~10GB files on SMB server
- touch each of them - just open, read 1 byte and close - from another machine via SMB
- the SMB server is brought to its knees due to cache pressure
One would then notice those runaway applications, like IE with JScript-heavy
pages, and the crappy state of SMB server/client CPU consumption.
In Vista+, SMB often requires 30 seconds or so to log on.
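For concreteness, here is a rough Win32 sketch of the repro above (server
name, share path, file count, and sizes are all illustrative; the first pass
runs locally on the server, the second from the other machine):

#include <windows.h>
#include <stdio.h>
#include <string.h>

/* Pass 1, run locally on the server: extend a file to ~10GB
   without writing any data. */
static void CreateBigFile(const char *path, LONGLONG size)
{
    HANDLE h = CreateFileA(path, GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    LARGE_INTEGER li;
    if (h == INVALID_HANDLE_VALUE) return;
    li.QuadPart = size;
    SetFilePointerEx(h, li, NULL, FILE_BEGIN);
    SetEndOfFile(h);
    CloseHandle(h);
}

/* Pass 2, run from another machine: open the file over SMB,
   read one byte, and close it. */
static void TouchRemoteFile(const char *uncPath)
{
    char one;
    DWORD got = 0;
    HANDLE h = CreateFileA(uncPath, GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) return;
    ReadFile(h, &one, 1, &got, NULL);
    CloseHandle(h);
}

int main(int argc, char **argv)
{
    char path[MAX_PATH];
    int i;
    if (argc > 1 && strcmp(argv[1], "server") == 0) {
        for (i = 0; i < 4; i++) {            /* pass 1: on the server */
            snprintf(path, sizeof(path), "C:\\share\\big%d.bin", i);
            CreateBigFile(path, 10LL << 30); /* ~10GB each */
        }
    } else {
        for (i = 0; i < 4; i++) {            /* pass 2: on the client */
            snprintf(path, sizeof(path), "\\\\server\\share\\big%d.bin", i);
            TouchRemoteFile(path);
        }
    }
    return 0;
}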
–
Maxim S. Shatskih
Windows DDK MVP
xxxxx@storagecraft.com
http://www.storagecraft.com