Someone eats LPT port access of a DOS box occasionally

It is hard to conjecture about possible solutions. I had a Win16 program
that directly manipulated the serial port registers. We needed to run it
under Win32. The program used a substantial 16-bit support library that
was an abandoned product; the company had gone out of existence. There
was no way we could port this monster to Win32 without also rewriting that
library, whose use was pervasive.

I solved it in an interesting way. Because the original programmer had
the good taste to wrap the _inp and _outp calls in higher-level
abstractions, what I did was write a 32-bit app that communicated with the
serial port, and replaced the bodies of those functions with interprocess
communication calls to my Win32 process. The interrupt handler for the
original Win16 serial ports was replaced by a handler for messages the
co-process would send (well, post, actually). Not only did this solve the
problem, but a few weeks after delivery, the client asked “Can you support
UDP as a source of information?” Two days later, it had UDP support. He
sent it to beta sites. About a week after that, he asked “Can you support
TCP/IP?” Three days later I sent him a version that supported TCP/IP.
Then he asked, “Could you do a text-to-speech output for user-designated
messages?” That took a bit longer, because many of the messages had to be
transformed for the Microsoft TTS component; for example, “5/1/05 03:10”
would produce meaningless speech output. I had to send “five one oh five
at oh three ten” to make it come out right, and try to do this in a way
that could be localized. But two weeks after the request was made we had
a talking app. All because we needed to handle _inp and _outp.
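
For the curious, the shape of that change was roughly this. This is a
from-memory sketch, not the actual code: SendToCoProcess and MSG_TX_BYTE
are invented stand-ins for the real IPC shim.

#include <conio.h>               /* _inp, _outp in the 16-bit compiler */

#define COM1_BASE 0x3F8
#define THR       0              /* transmit holding register offset  */
#define LSR       5              /* line status register offset       */
#define LSR_THRE  0x20           /* "holding register empty" bit      */

void SendToCoProcess(int msg, int data);   /* hypothetical IPC shim */
#define MSG_TX_BYTE 1                      /* invented message code */

#if 0   /* Before: the wrapper banged on the UART directly */
void SerialPutChar(char ch)
{
    while (!(_inp(COM1_BASE + LSR) & LSR_THRE))
        ;                        /* spin until the UART can take a byte */
    _outp(COM1_BASE + THR, ch);
}
#endif

/* After: same wrapper, same callers, but the body just forwards the
   abstraction ("transmit this byte") to the Win32 co-process */
void SerialPutChar(char ch)
{
    SendToCoProcess(MSG_TX_BYTE, ch);
}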

Now, there are some other approaches possible. I will not say these are
better than a VDD, or easier, or any other comparison. But they might be,
depending on how the app was written.

One of the first things I considered was creating a library for the 16-bit
app in which I replaced the routines _inp and _outp with my own versions.
I had to abandon this idea after an hour or so of research, because there
was really no good way to call Win32 APIs from an NTVDM, and Win32s
(anyone remember that?) did not support the APIs we needed. In addition,
it seemed important to know /why/ that particular IN or OUT instruction was
being executed. As I said, I was fortunate to have had the original
programmer think at a higher level of abstraction, so if I moved up a
level to source code I knew the rationale of the call, and its role in the
activity of the calling program, so I didn’t /need/ to simulate it. I
could ignore it entirely, in most cases, and just send the string to be
written across to the co-process. Done.

Another consideration was to run the 16-bit process under a debugger-like
app, and when it got an illegal instruction trap for IN or OUT, simulate
it, and resume at the next instruction. I tried a simple experiment, and it
didn’t work too well when the contained process was an NTVDM, and by that
time, I’d already thought about the co-process solution, so didn’t waste
any more time going down that path.
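
For anyone who wants to repeat the experiment, the skeleton under the
Win32 debug API looks like this. SimulatePortOp is left empty because
that is where all the real work (instruction decode, port emulation)
lives, and the one-byte EIP adjustment assumes the register form of
IN/OUT; real code would have to decode the instruction to know how far
to advance.

#include <windows.h>
#include <stdio.h>

/* All the hard parts -- decoding the faulting instruction, emulating
   the port access -- would live here. */
static void SimulatePortOp(CONTEXT *ctx)
{
    (void)ctx;
}

int main(void)
{
    char cmd[] = "victim.exe";          /* the contained 16-bit app */
    STARTUPINFO si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    DEBUG_EVENT ev;

    if (!CreateProcess(NULL, cmd, NULL, NULL, FALSE,
                       DEBUG_ONLY_THIS_PROCESS, NULL, NULL, &si, &pi))
        return 1;

    for (;;) {
        DWORD cont = DBG_EXCEPTION_NOT_HANDLED;

        WaitForDebugEvent(&ev, INFINITE);
        if (ev.dwDebugEventCode == EXIT_PROCESS_DEBUG_EVENT)
            break;

        if (ev.dwDebugEventCode == EXCEPTION_DEBUG_EVENT) {
            DWORD code = ev.u.Exception.ExceptionRecord.ExceptionCode;

            if (code == EXCEPTION_BREAKPOINT) {
                cont = DBG_CONTINUE;    /* swallow the loader breakpoint */
            } else if (code == EXCEPTION_PRIV_INSTRUCTION) {
                HANDLE th = OpenThread(THREAD_ALL_ACCESS, FALSE,
                                       ev.dwThreadId);
                CONTEXT ctx;

                ctx.ContextFlags = CONTEXT_FULL;
                GetThreadContext(th, &ctx);
                SimulatePortOp(&ctx);   /* emulate the IN or OUT       */
                ctx.Eip += 1;           /* skip it: assumes the 1-byte */
                                        /* "IN AL,DX"/"OUT DX,AL" form */
                SetThreadContext(th, &ctx);
                CloseHandle(th);
                cont = DBG_CONTINUE;
            }
        }
        ContinueDebugEvent(ev.dwProcessId, ev.dwThreadId, cont);
    }
    return 0;
}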

An even more grotesque solution is one I have seen in an actual product.
It was a FORTRAN compiler, and the problem needed efficient bit-shifting
and masking. So the programmer would write
RESULT = SHIFTRIGHT(DATA, 3)
or
RESULT = BITAND(ARG1, ARG2)

When the BITAND subroutine was called (this was on relatively “dead-slow”
machines by modern standards, such as its actual host, the PDP-11, 300
KIPS) it reached out via the return address, and replaced the CALL and
parameter setups with an ASR instruction or an AND instruction, patched
the actual running code, and thereafter the instructions were executed
inline. Sort of a “Poor Man’s LTCG”, to use modern terms.
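
A rough modern rendering of the same stunt, under heavy assumptions
spelled out in the comments (32-bit x86, __fastcall so the arguments
already sit in registers, and a plain 5-byte CALL rel32 at the call
site; none of this is guaranteed in real code):

#include <windows.h>
#include <string.h>
#include <intrin.h>

#pragma intrinsic(_ReturnAddress)

/* Assumes: 32-bit x86, __fastcall (a in ECX, b in EDX, result in EAX),
   and a direct 5-byte "call rel32" at the call site.  The first call
   computes the result AND rewrites the CALL into an inline AND, so
   every later execution runs the instruction directly. */
__declspec(noinline) unsigned __fastcall BitAnd(unsigned a, unsigned b)
{
    unsigned char *site = (unsigned char *)_ReturnAddress() - 5;
    DWORD old;

    if (site[0] == 0xE8 &&               /* really a CALL rel32?       */
        VirtualProtect(site, 5, PAGE_EXECUTE_READWRITE, &old)) {
        static const unsigned char patch[5] = {
            0x8B, 0xC1,                  /* mov eax, ecx               */
            0x21, 0xD0,                  /* and eax, edx               */
            0x90                         /* nop (pad to CALL's length) */
        };
        memcpy(site, patch, sizeof(patch));
        VirtualProtect(site, 5, old, &old);
        FlushInstructionCache(GetCurrentProcess(), site, 5);
    }
    return a & b;                        /* still answer this one call */
}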

I just throw these ideas out, but emulating a raw IN or OUT instruction
may not be the best solution, even if done in a VDD. Note that my
solution (which did require modifying the 16-bit source code, but mostly
by removing code) was the most effective because it implemented the
abstractions (“send bytes to serial port”) instead of having to precisely
simulate the hardware behavior. Given that this is a VDD, it seems that
it would be called to simulate abstractions (“write string”, “drop flow
control signal”) instead of working at a lower level. Given my experience
with the highly-successful co-process approach, I’d investigate what APIs
a VDD could access to provide IPC.
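
And the answer is encouraging: a VDD is an ordinary user-mode DLL loaded
into ntvdm.exe, so its port handlers can call anything in Win32, pipes
included. From memory of the DDK’s VDD samples, the hookup is roughly
this; the pipe name and the status byte are illustrative, not from any
real project:

#include <windows.h>
#include <vddsvc.h>      /* VDDInstallIOHook and friends, from the DDK */

#define LPT1_BASE 0x378

static HANDLE hPipe = INVALID_HANDLE_VALUE;

/* OUT to a hooked port lands here; forward the abstraction ("print
   this byte") to a co-process instead of modelling the hardware. */
static VOID LptOutB(WORD port, BYTE data)
{
    DWORD written;

    if (port == LPT1_BASE && hPipe != INVALID_HANDLE_VALUE)
        WriteFile(hPipe, &data, 1, &written, NULL);
    /* writes to 0x37A (control: strobe, init, ...) could be mapped to
       flow-control actions the same way */
}

/* IN from a hooked port lands here; fake a "ready, no errors" status. */
static VOID LptInB(WORD port, PBYTE data)
{
    *data = (port == LPT1_BASE + 1) ? 0xD8 /* illustrative status byte */
                                    : 0xFF;
}

/* DLL entry point, in the style of the DDK's VDD samples. */
BOOL WINAPI VDDInitialize(HANDLE hVdd, DWORD reason, LPVOID reserved)
{
    if (reason == DLL_PROCESS_ATTACH) {
        VDD_IO_PORTRANGE range = { LPT1_BASE, LPT1_BASE + 2 };
        VDD_IO_HANDLERS handlers = { 0 };

        handlers.inb_handler  = LptInB;
        handlers.outb_handler = LptOutB;
        hPipe = CreateFile("\\\\.\\pipe\\lpt-coprocess", GENERIC_WRITE, 0,
                           NULL, OPEN_EXISTING, 0, NULL);
        return VDDInstallIOHook(hVdd, 1, &range, &handlers);
    }
    return TRUE;
}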

Otherwise, it might be that this could be as much fun as kicking a dead
whale down the beach.
joe

To the OP:

Have you ever written a virtual CPU program? This used to be one of the
standard assignments in computer science courses, but I have no idea what
is taught now, or when/where you went to school.

Assuming that performance is not an issue, which we do not expect given
that the program was likely written for 486 or slower machines, you could
write a wrapper program that interpreted the 16-bit code instead of
letting NTVDM run it. You would be guaranteed complete control over the
program’s execution, and depending on what the program does, it isn’t
that much more complex than what you are already doing. The big part is
decoding the instructions, but once you have that, it is just a lot of
typing to support all of the instructions and other operations used by
your particular program.
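
The core of such an interpreter is just a fetch/decode loop, and the
payoff is that IN and OUT stop being traps and become ordinary cases in
your switch. A toy fragment, decoding only the two register forms (a
real one needs the whole 8086 set, prefixes, segmentation, and flags):

#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint16_t ip, dx;
    uint8_t  al;
    uint8_t  mem[0x10000];   /* one 64 KB segment, for simplicity */
} Cpu;

/* The interpreter owns every port access, so redirect them anywhere:
   a USB printer, a pipe to a co-process, a log file... */
static void port_out(uint16_t port, uint8_t v)
{
    printf("OUT %04X <- %02X\n", port, v);
}

static uint8_t port_in(uint16_t port)
{
    (void)port;
    return 0xFF;             /* whatever status the app expects */
}

static void step(Cpu *c)
{
    uint8_t op = c->mem[c->ip++];

    switch (op) {
    case 0xEE: port_out(c->dx, c->al); break;  /* OUT DX, AL */
    case 0xEC: c->al = port_in(c->dx); break;  /* IN  AL, DX */
    /* ...every other opcode the target program actually uses... */
    default:
        fprintf(stderr, "unimplemented opcode %02X at %04X\n",
                op, (uint16_t)(c->ip - 1));
        break;
    }
}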

As Joe mentions though, if you do have the source code, it may be simpler
to alter the program.

“James Harper” wrote in message news:xxxxx@ntdev…

>
> Actually, it was implied. The issue is how to redirect physical LPT
> operations to a USB printer interface. It may have been possible in
> Win2K, but my memory says that USB was not supported in Windows until
> XP. XP will not run at all on legacy hardware, so it is an XP+ OS
> running on a modern system.

Okay, I guess we’re talking about different things. I was inferring that
they were running a legacy application (system) on modern hardware, so I
was disagreeing with a point you weren’t making…

>
> So your above description is just guesswork, and may or may not reflect
> the actual situation. In addition, if the VDD intercepts 100% of the
> port operations, how does it call the host OS USB support (and by
> “host” I mean “the 32-bit OS which is running the NTVDM”)?

Well, this is the main problem I guess… the OP gave a vague description
of the point he was stuck on, and without any further information about
what he was trying to do, any subsequent discussion is just for shits and
giggles.

James


xxxxx@flounder.com wrote:

> I know that in NT4 it was possible to use 16-bit drivers, but I thought
> that went away with Win2K, about 14 years ago.

It’s not a 16-bit driver. A VDD is a 32-bit driver that supports 16-bit
applications. They still work perfectly well in the 32-bit versions of
Windows 8. They HAVE to work, or 16-bit apps would not operate.
Remember that, in order for a 16-bit application to do ANY kind of I/O,
it has to write to VGA ports, among others. All of those writes are
trapped and handled by a VDD.

> Trapping reads and writes to I/O ports sounds so completely bizarre in a
> modern system (well, any system) that I can’t see the point to it.

Are you serious? Have we really all forgotten where we came from? All
you have to do is set the I/O Permission Mask appropriately, and all
writes to I/O ports in ring 3 become trappable exceptions. If you do
that from a 32-bit app today, the exception is exposed. If you do that in
a 16-bit app, it traps to a VDD. It’s not bizarre at all. In fact,
it’s easy.
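
You can watch it happen from any 32-bit x86 process; the OUT that
_outp() emits comes straight back as an ordinary SEH exception:

#include <windows.h>
#include <conio.h>    /* _outp; 32-bit x86 build */
#include <stdio.h>

int main(void)
{
    __try {
        _outp(0x378, 0x55);    /* raw write to LPT1's data port */
    }
    __except (GetExceptionCode() == EXCEPTION_PRIV_INSTRUCTION
                  ? EXCEPTION_EXECUTE_HANDLER
                  : EXCEPTION_CONTINUE_SEARCH) {
        printf("OUT trapped in ring 3, as advertised\n");
    }
    return 0;
}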

> Besides, if the ports simply don’t exist, how can you trap operations on
> them at all?

Now you’re just being silly. All 64k I/O ports always exist. There
might not be any hardware connected to them, but that’s irrelevant in
this context, because the writes are all being simulated in software.

> It seems to me that it would be more effective to define a device that
> supports the abstract interface and talks to USBD.

Fine idea, unless you have a 16-bit application that doesn’t know it’s
being run in Windows at all.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

On 29-Apr-2013 02:59, xxxxx@flounder.com wrote:

> Win98 was not Windows. It was MS-DOS with a GUI interface.

Ouch. We’ve probably scared the OP and he won’t return to explain what
he really is after, I’m afraid…
– pa

henrik.haftmann@mb.tu-chemnitz.de wrote:

> I’m writing a Virtual Device Driver (VDD) for DOS applications.
> Obviously, for my problem (redirecting parallel port access to a
> USB->ParallelPrinter converter), hooking the well-known LPT addresses is
> mandatory; I cannot move to other addresses.

Are you sure that you need to write a driver at all? From the Server
2003 DDK:

“Note: A VDD need only be written to support special-purpose hardware
devices that operate under a Microsoft MS-DOS application. The provided
VDM [virtual DOS machine] has built-in support for commonly used
hardware such as serial communication ports, video, mouse, and keyboard.
Consequently, you should not need a VDD to virtualize access to these
common devices.”

And at least for serial ports, I can confirm that this is true and that
it works very well. We had some old DOS applications directly hitting
the UART registers. Not only did they run on NT, but much to my surprise
they could even talk to virtual COM ports (in our case on the Ethernet),
as long as these ports had a name in the usual range of COM1 - COM4.