Real Mode

Aram Hăvărneanu wrote:

>> I’m not sure whether you’re familiar with it, but this is EXACTLY the
>> approach used by the X Window server on Linux. Many of the graphics
>> drivers call into INT 10 to set the video mode, and that’s a big problem
>> on non-x86 machines, since ROM BIOSes are always x86 machine code. So,
>> they have an x86 emulator embedded right in the server that simulates
>> the BIOS code. It works surprisingly well.
>
> You’d think open source people would fix their drivers instead…

Nicely snarky, but the fact is that quite a number of Windows display
drivers do exactly the same thing. Twenty-first century graphics chips
have a bazillion obscure registers to configure everything from RAM type
and cycle timing to fine-tuning the color generators. The hardware
teams create known-good configuration sequences and embed them in the
BIOS. It doesn’t make sense to recreate all of that in a graphics
driver if we can just use code that already exists.

In addition, the manufacturers don’t want to release the details of
their low-level configurations into the open source world. Leaving it
in the BIOS protects their IP.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

> If my emulator comes across an OUT DX, AL instruction, I will emulate that by executing WRITE_PORT_UCHAR with the emulated values of DX and AL.

…but you will do it in protected mode, right? No matter what you emulate, your emulator still actually runs in protected mode. If I got it right, the OP made it clear that, for one reason or another, his device has to be accessed while the physical CPU is in real mode (actually, I think this is not 16-bit real mode but probably 32-bit SMM).

In either case, it does not seem to make sense to discuss this until the OP provides us with more details. Otherwise the whole thing may take the route that all “long rambling threads” on NTDEV used to take: the OP asks a question and disappears, and a dozen “regulars” discuss it, eventually going completely off-topic
and making the thread grow to 150+ posts, with at least a third of them mine. As you can see, Aram is already trying to go in this direction…

Anton Bassov

xxxxx@hotmail.com wrote:

>> If my emulator comes across an OUT DX, AL instruction, I will emulate that by executing WRITE_PORT_UCHAR with the emulated values of DX and AL.
>
> …but you will do it in protected mode, right? No matter what you emulate, your emulator still actually runs in protected mode. If I got it right, the OP made it clear that, for one reason or another, his device has to be accessed while the physical CPU is in real mode (actually, I think this is not 16-bit real mode but probably 32-bit SMM).

Oh, come on. That’s just silly. What he said was that the **BIOS**
code had to run in real mode, as virtually all ROM BIOS code does. Use
your head. Why would the hardware care what mode the CPU was in? How
would it even know?

I’m making assumptions here, of course, but I’m pretty certain this was
not a request from some technical sophisticate pushing the envelope of
the Windows driver world. This was someone who has a piece of hardware
that works in DOS, or Windows 95, who was told to find out what it would
take to make it run in XP. He asked a basic question based on the
assumption that things are still done that way.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

On Fri, Oct 1, 2010 at 7:53 PM, Tim Roberts wrote:
> Aram Hăvărneanu wrote:
>>> I’m not sure whether you’re familiar with it, but this is EXACTLY the
>>> approach used by the X Window server on Linux. Many of the graphics
>>> drivers call into INT 10 to set the video mode, and that’s a big problem
>>> on non-x86 machines, since ROM BIOSes are always x86 machine code. So,
>>> they have an x86 emulator embedded right in the server that simulates
>>> the BIOS code. It works surprisingly well.
>> You’d think open source people would fix their drivers instead…
>
> Nicely snarky, but the fact is that quite a number of Windows display
> drivers do exactly the same thing.

My comment wasn’t meant in any way to be snarky or injurious to the
Open Source people (I am an Open Source fan myself); I simply
found it strange that people don’t fix their drivers in the open source
world, where fixing your drivers is cheaper (anybody can do it) and
people care more about non-IA32 architectures.

> Twenty-first century graphics chips
> have a bazillion obscure registers to configure everything from RAM type
> and cycle timing to fine-tuning the color generators. The hardware
> teams create known-good configuration sequences and embed them in the
> BIOS.

I’m afraid I don’t follow. I thought you were talking about drivers,
not firmware. So, you’re saying graphics card vendors put IA-32 code in the
card that’s used by the PC’s INT 10H interrupt? Are you talking about copying
this code into the drivers and using it (directly or emulated), or are
you talking about calling the code through some mechanism?

Well, if card vendors put IA-32 code in the card, that card is designed
to work only on PCs, so what’s the point of emulating the code anyway? It
should work in VGA mode. Plus, very, very few non-IA32 architectures
‘care’ about graphics cards that do more than VGA; most don’t care
about VGA mode at all.


Aram Hăvărneanu

Aram Hăvărneanu wrote:

> On Fri, Oct 1, 2010 at 7:53 PM, Tim Roberts wrote:
>
> I’m afraid I don’t follow. I thought you were talking about drivers,
> not firmware. So, you’re saying graphics card vendors put IA-32 code in the
> card that’s used by the PC’s INT 10H interrupt?

Yes. In the computer you are running right now, the real-mode INT 10
vector points into the ROM BIOS for your graphics chip. It is 16-bit
real-mode code, although most of it is written to be called from 16-bit
protected mode. Some graphics drivers call into that code to do the
initial mode set.

> Are you talking about copying
> this code in the drivers and using it (directly or emulated) or are
> you talking about calling the code through some mechanism?

I’m not sure there’s a difference. It’s just memory, albeit at an
unusual address. In x86 machines, they jump directly into the code at
its low-memory address. In non-x86 machines, they pass a pointer to the
code into the x86 emulator. ROM access isn’t particularly fast, so it’s
possible they copy the block to RAM first.

> Well if card vendors put IA-32 code in the card, that card is designed
> to only work on PCs, what’s the point of emulating the code anyway?

The vendors are REQUIRED to put IA-32 code in their ROM BIOS if they
hope to have the board run in the trillions of PCs in the world today,
because of the inherited legacy design. Otherwise, you’d see nothing on
the screen until Windows loaded a hi-res driver. There are a lot of
non-x86 machines with PCI buses, but not nearly enough to encourage
graphics card vendors to create a MIPS version of the board, or a
PowerPC version of the board. So, they all have ROM BIOS chips with
16-bit IA-32 code in them.

> It should work in VGA mode.

It only works in VGA mode if someone calls the ROM BIOS to initialize it
in VGA mode.

> Plus very very few non-IA32 architectures
> ‘care’ about graphics cards that do more than VGA, most don’t care
> about VGA mode at all.

That’s just not true. When people run Linux on such a machine, they
want to run X in hi-res mode. Rather than reinvent or reverse engineer
the complicated startup sequences, it is much more economical to reuse
the code that is already present in the ROM BIOS, which has been
approved by the manufacturer.

PowerPC Macs with PCI buses had this exact situation. They all cared
very much about hi-res graphics cards, and they used off-the-shelf PC
graphics cards to do it.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

> Oh, come on. That’s just silly. What he said was that the **BIOS** code had to run in real mode,
> as virtually all ROM BIOS code does. Use your head. Why would the hardware care what mode the
> CPU was in?

I thought the OP meant a piece of hardware that simply does not expect to be accessed directly by anyone apart from the firmware that runs in SMM (for example, something like a thermal device on SMBus). This is how I understood it, but I may well be wrong here…

Anton Bassov