That is correct: if you compress the bitmaps before sending them to the
client, you will reduce the amount of data being sent and be able to generate
more FPS. There are two other factors, though, in that you may need to either
drop frames or buffer at the client before playing the stream. Also, if you
want to synchronize sound, you have more work to do, because if you only deal
with the video portion you will only get the sound in sync "by chance", since
it is sent via a separate virtual channel.
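As a minimal sketch of that idea, assuming the frame has already been captured
somewhere, the server could deflate it with zlib (just a stand-in for a real
video codec) and push the smaller buffer down a custom virtual channel with
the wtsapi32 calls:

/*
 * Sketch: compress a captured frame before writing it to a virtual channel.
 * zlib here is only a placeholder for a proper video codec; the channel
 * handle is assumed to have been opened with WTSVirtualChannelOpen.
 */
#include <windows.h>
#include <wtsapi32.h>
#include <stdlib.h>
#include <zlib.h>

#pragma comment(lib, "wtsapi32.lib")

BOOL SendCompressedFrame(HANDLE hChannel, const BYTE *frame, ULONG frameSize)
{
    uLongf compSize = compressBound(frameSize);   /* worst-case output size */
    BYTE *compBuf = (BYTE *)malloc(compSize);
    if (compBuf == NULL)
        return FALSE;

    /* Deflate the raw frame; a real codec would do far better than zlib */
    if (compress(compBuf, &compSize, frame, frameSize) != Z_OK) {
        free(compBuf);
        return FALSE;
    }

    ULONG written = 0;
    BOOL ok = WTSVirtualChannelWrite(hChannel, (PCHAR)compBuf,
                                     (ULONG)compSize, &written);
    free(compBuf);
    return ok;
}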
There are several methods that deal with video. The first is the method used
by "Video Frame", in which you recompress the video using a proprietary
low-bandwidth format, then stream that format from the server to the client
over your own virtual channel. The client then becomes a player, but it can
be a thin player, since you can ship your codec with the virtual channel.
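As a rough illustration of what such a home-grown stream might carry per
frame (this layout is purely an assumption, not the actual "Video Frame" wire
format), the server can tag each encoded frame with a sequence number and
timestamp so the thin player can detect dropped frames and buffer before
playback:

#include <windows.h>

#pragma pack(push, 1)
typedef struct _STREAM_FRAME_HEADER {
    ULONG     Magic;           /* sanity check on the channel data */
    ULONG     SequenceNumber;  /* lets the client detect dropped frames */
    ULONGLONG TimestampMs;     /* presentation time, for buffering decisions */
    ULONG     PayloadBytes;    /* size of the encoded frame that follows */
} STREAM_FRAME_HEADER;
#pragma pack(pop)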
Another method is what Citrix did with Multimedia Acceleration, which
basically requires a thick client. You intercept the media stream and, rather
than decoding it at the server, send it to the client. This requires that the
client have the codec to decompress whatever video is being streamed. The
size of the media also determines what kind of bandwidth you would need to
play it back. Of course, you can also buffer the data, and since you stream
the original compressed format, it contains both audio and video, so you do
not have to deal with synchronization issues.
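A client-side sketch of that, assuming a hypothetical tagged packet framing:
because the server forwards the compressed stream untouched, audio and video
arrive interleaved with the container's own timestamps, so playback order is
already correct. DecodeVideoSample() and DecodeAudioSample() stand in for
whatever codec the client has installed.

#include <windows.h>

typedef struct _MEDIA_PACKET_HEADER {
    ULONG     Type;            /* 0 = video, 1 = audio */
    ULONGLONG TimestampMs;     /* presentation time from the source container */
    ULONG     PayloadBytes;    /* compressed sample data that follows */
} MEDIA_PACKET_HEADER;

void DecodeVideoSample(const BYTE *data, ULONG size, ULONGLONG pts);  /* hypothetical */
void DecodeAudioSample(const BYTE *data, ULONG size, ULONGLONG pts);  /* hypothetical */

void DispatchMediaPacket(const MEDIA_PACKET_HEADER *hdr, const BYTE *payload)
{
    /* Samples are handed to the client-installed codec in arrival order */
    if (hdr->Type == 0)
        DecodeVideoSample(payload, hdr->PayloadBytes, hdr->TimestampMs);
    else
        DecodeAudioSample(payload, hdr->PayloadBytes, hdr->TimestampMs);
}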
The next part of your question is about a mirror driver in the context of a
session. The problem is that since you are now mirroring the display, you are
not taking over the remote display driver, so that driver will still stream
the image down to the client anyway, wasting bandwidth. Secondly, drivers do
not run everything at dispatch level; they can run at passive level, so that
is not the issue. The issue you have is making sure your codec would run
properly in the kernel (no bugs), as well as, if you require floating-point
operations, properly preserving thread state (calling a few save/restore
APIs).
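A minimal kernel-mode sketch of those save/restore APIs (the compression
routine itself is left out):

#include <ntddk.h>

NTSTATUS CompressWithFloatSafety(void)
{
    KFLOATING_SAVE floatSave;
    NTSTATUS status;

    /* Save the thread's floating-point state before using FP instructions;
       callable at IRQL <= DISPATCH_LEVEL, typically PASSIVE_LEVEL here */
    status = KeSaveFloatingPointState(&floatSave);
    if (!NT_SUCCESS(status))
        return status;

    /* ... floating-point-heavy compression work would go here ... */

    KeRestoreFloatingPointState(&floatSave);
    return STATUS_SUCCESS;
}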
Of course, you can relay the data back to a user-mode service and compress it
there; you just want to be efficient in anything you do. You still have the
sound synchronization to deal with, and you have the issue that you need to
stop the remote display driver from sending these images and wasting
bandwidth.
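A rough sketch of that relay, assuming the driver exposes a device and an
IOCTL for fetching the next captured frame (the device name, IOCTL code, and
CompressAndSend helper are all made up for illustration):

#include <windows.h>
#include <winioctl.h>

#define IOCTL_CAPTURE_GET_FRAME \
    CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_OUT_DIRECT, FILE_READ_ACCESS)

void CompressAndSend(const BYTE *data, DWORD size);   /* hypothetical helper */

void PumpFrames(void)
{
    /* The service opens the capture device exposed by the (hypothetical) driver */
    HANDLE hDevice = CreateFileW(L"\\\\.\\MyMirrorCapture", GENERIC_READ,
                                 0, NULL, OPEN_EXISTING, 0, NULL);
    if (hDevice == INVALID_HANDLE_VALUE)
        return;

    static BYTE frame[1920 * 1080 * 4];   /* 32bpp frame buffer, for example */
    DWORD bytes;

    /* Each frame is pulled into user mode and compressed at passive level */
    while (DeviceIoControl(hDevice, IOCTL_CAPTURE_GET_FRAME,
                           NULL, 0, frame, sizeof(frame), &bytes, NULL)) {
        CompressAndSend(frame, bytes);
    }

    CloseHandle(hDevice);
}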
Also, depending on what the application is doing (movie, OpenGL, DirectX,
etc.), to do efficient remoting you would really need to case-study each of
these application types and determine the optimal approach for each.
Remember, the default graphics display remoting is tuned for GDI and works
well for that, right? But it breaks down because it is not a general,
one-size-fits-all solution. In fact, OpenGL and DirectX are also implemented
differently at the driver level for hardware access, to give an example, so
the same distinction applies in the remote context.
To give a different example, say OpenGL: perhaps you want to render the image
on the server hardware, then send the image to the client, compressed, by
intercepting the OpenGL APIs. Here is an example of this approach
(http://www.thinanywhere.com/).
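A sketch of that readback step, assuming you have hooked the point where the
application would call SwapBuffers (CompressAndSend is again a hypothetical
helper):

#include <windows.h>
#include <stdlib.h>
#include <GL/gl.h>

void CompressAndSend(const BYTE *data, DWORD size);   /* hypothetical helper */

void RemoteFrame(HDC hdc, int width, int height)
{
    size_t size = (size_t)width * height * 4;
    BYTE *pixels = (BYTE *)malloc(size);
    if (pixels == NULL)
        return;

    /* Pull the finished, server-rendered frame out of the framebuffer */
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    CompressAndSend(pixels, (DWORD)size);  /* compress + ship over the channel */
    free(pixels);

    SwapBuffers(hdc);                      /* then let the real swap proceed */
}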
-----Original Message-----
From: xxxxx@yahoo.com [mailto:xxxxx@yahoo.com]
Sent: Tuesday, November 20, 2007 8:11 PM
To: Windows System Software Devs Interest List
Subject: RE:[ntdev] Accessing bitmaps in terminal server
That means, if I can somehow compress the bitmaps/frames and send the
compressed data to the client, I could reduce the amount of data
transmitted, couldn't I?
Can I use a mirror driver to access these bitmaps? Will it cause a problem
because the driver will be running at DISPATCH_LEVEL, but the compression
algorithms (if used) should be running at PASSIVE_LEVEL?
I have gone through so many resources. I am writing based on what I have
understood. I may be absolutely wrong also.