Do the data types for Windows drivers follow LP64? IOW, is a ULONG on
64-bit hardware 64 bits wide?
Thanks,
-PWM
No, a ULONG is 32 bits.
See the MSDN topic “Windows Data Types”.
http://msdn.microsoft.com/en-us/library/aa383751(VS.85).aspx
Thomas F. Divine
http://www.pcausa.com
From: “Peter W. Morreale”
Sent: Monday, February 15, 2010 4:03 PM
To: “Windows System Software Devs Interest List”
Subject: [ntdev] Is Windows LP64?
>
> Do the data types for Windows drivers follow LP64? IOW, is a ULONG on
> 64bit hardware 64bits wide?
>
> Thanks,
> -PWM
>
>
> —
> NTDEV is sponsored by OSR
>
> For our schedule of WDF, WDM, debugging and other seminars visit:
> http://www.osr.com/seminars
>
> To unsubscribe, visit the List Server section of OSR Online at
> http://www.osronline.com/page.cfm?name=ListServer
> Do the data types for Windows drivers follow LP64? IOW, is a ULONG on
> 64-bit hardware 64 bits wide?
No, ULONG is still 32-bit; size_t and ULONG_PTR are 64-bit.
Use size_t instead of ULONG whenever possible and where it makes sense - this is the simplest path to an x64 port.
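A minimal, portable sketch of that advice; the function and its names are hypothetical, and plain standard C is used here rather than the WDK headers:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Sketch of Maxim's advice: declare buffer lengths as size_t, which is
 * 32 bits on x86 and 64 bits on x64, instead of ULONG, which stays
 * 32 bits on both and can silently truncate large lengths after an x64
 * port.  copy_string is a made-up example helper. */
static size_t copy_string(char *dst, size_t dst_len, const char *src)
{
    size_t n = strlen(src);
    if (n >= dst_len)
        n = dst_len - 1;      /* truncate, leaving room for the NUL */
    memcpy(dst, src, n);
    dst[n] = '\0';
    return n;                 /* bytes copied, excluding the NUL */
}
```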
–
Maxim S. Shatskih
Windows DDK MVP
xxxxx@storagecraft.com
http://www.storagecraft.com
On Tue, 2010-02-16 at 00:12 +0300, Maxim S. Shatskih wrote:
> > Do the data types for Windows drivers follow LP64? IOW, is a ULONG on
> > 64-bit hardware 64 bits wide?
> No, ULONG is still 32-bit; size_t and ULONG_PTR are 64-bit.
> Use size_t instead of ULONG whenever possible and where it makes sense - this is the simplest path to an x64 port.
Nod. Then, IIUC, the compiler is ANSI and the native C types are LP64?
Thx,
-PWM
Peter W. Morreale wrote:
> On Tue, 2010-02-16 at 00:12 +0300, Maxim S. Shatskih wrote:
>> Do the data types for Windows drivers follow LP64? IOW, is a ULONG on
>> 64bit hardware 64bits wide?
>>
> No, ULONG is still 32bit, size_t and ULONG_PTR are 64bit.
>
> Use size_t instead of ULONG whenever possible and makes sense - this is the simplest way of x64 port.
> Nod. Then, IIUC, the compiler is ANSI and the native C types are LP64?
No, the native C types are ILP64, in Unix terms. “long” is 32-bit. Not
sure what ANSI has to do with it.
In my opinion, this was a huge mistake on Microsoft’s part, but they
didn’t ask me.
–
Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.
I was disappointed at some of the 64-bit decisions made long ago. I could not believe the page size stayed down at 4K. I was even more disappointed when Microsoft decided int and long would both be 32 bits, and instead invented ugly, proprietary things like __int3264 in their compilers.
Today the data type “long long” has become the de facto industry-standard way to define a 64-bit type, and constants can use an LL suffix, such as #define NUM 4LL.
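As a small illustration of the LL suffix (the constant and helper below are made-up examples): without the suffix, a value this large would not fit in a 32-bit int literal.

```c
#include <assert.h>

/* C99 guarantees long long is at least 64 bits; the LL suffix makes the
 * constant that type, so it is not truncated while the source is being
 * parsed.  NUM_BYTES is a hypothetical example constant. */
#define NUM_BYTES 4294967296LL        /* 4 GiB: does not fit in 32 bits */

/* Round a byte count up to whole pages (page_size a power of ten or two,
 * here just an ordinary divisor). */
static long long pages_of(long long bytes, long long page_size)
{
    return (bytes + page_size - 1) / page_size;   /* round up */
}
```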
Peter W. Morreale wrote:
> Do the data types for Windows drivers follow LP64? IOW, is a ULONG on
> 64-bit hardware 64 bits wide?
No, it’s LLP64.
You wrote:
> I was disappointed at some of the 64-bit decisions made long ago. I
> could not believe the page size stayed down at 4K.
Well, that wasn't really Microsoft's decision to make...
> Today the data type "long long" has become the de facto industry standard
> way to define a 64-bit type and constants can use a LL suffix such as
> #define NUM 4LL
Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.
> No, the native C types are ILP64, in Unix terms. “long” is 32-bit. Not
> sure what ANSI has to do with it.
> In my opinion, this was a huge mistake on Microsoft’s part, but they
> didn’t ask me.
My guess is the choice was swayed to preserve people’s on-disk and network
data formats. A lot of people write binary C structures to disk, or send
them over network channels. It would be kind of ugly if the same source
code, compiled for 32-bit and for 64-bit, read and wrote incompatible file
formats.
This is a side effect of many C programmers’ frame of mind: they “know”
the data layout generated by the compiler. Clearly, if long varied between 4
and 8 bytes, a programmer, just by looking at the source code, would have no
idea what the binary format of a structure will be; it would depend on the
compilation context. If long stays the same size, those C programmers can
keep on living this fantasy. Since you generally don’t write pointers to
disk or send them over networks, a pointer changing size causes less
disruption for developers.
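Jan's point can be sketched with the C99 fixed-width types; the record layout below is hypothetical:

```c
#include <assert.h>
#include <stdint.h>

/* A record written to disk with `long` fields would be 4 bytes smaller
 * per field under LLP64 Windows than under LP64 Unix, making the files
 * incompatible between builds of the same source.  Fixed-width types
 * pin the layout on every platform.  record_v1 is a made-up example. */
struct record_v1 {
    uint32_t magic;       /* format tag: 4 bytes everywhere */
    uint32_t payload_len; /* 4 bytes everywhere             */
    uint64_t timestamp;   /* 8 bytes everywhere             */
};
```

With natural alignment this struct is 16 bytes on both 32-bit and 64-bit builds, so files written by one can be read by the other.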
Seems like no matter what the choice was, some people would not like it.
I’d argue the REAL mistake was made long ago, when the C language designers
didn’t specify the sizes of int and long; they just left it an implementer’s
choice, and implementers made different choices. Lots of language definitions
DO specify the size of basic types.
Jan
On Tuesday 16 February 2010 09:58:02 Jan Bottorff wrote:
> I’d argue the REAL mistake was made long ago, when the C language designers
> didn’t specify the sizes of int and long; they just left it an implementer’s
> choice, and implementers made different choices. Lots of language definitions
> DO specify the size of basic types.
This problem was fixed 10 years ago with C99 and its new types. Unfortunately,
it seems too many people learned to use C++ to overcome the limitations of C89,
so the compiler writers have been focusing on adding new features to C++
instead of improving C99 support.
–
Bruce Cran
> My guess is the choice was swayed to preserve people’s on disk and network data formats
The same argument could (and probably would) have been made going from Win16 to Win32, and by that logic we would today still have a 16-bit int.
> The LL and ULL suffixes have been part of Microsoft’s compilers for quite some time
For exactly as long as “long long” has existed, I believe. Before that, one had to tack I64 and UI64 onto the end of constants.
> 4K page… Well, that wasn’t really Microsoft’s decision to make…
Perhaps, though someone could have mentioned to AMD that a page size appropriate for the 1960s IBM 360 architecture might be a bit out of date.
“Jan Bottorff” wrote in message
news:xxxxx@ntdev…
>
> I’d argue the REAL mistake was long ago when the C language designers
> didn’t
> specify the size of int and long, they just left it an implementers
> choice,
> and implementers make different choices. Lots of language definitions DO
> specify the size of basic types.
>
Actually, most older languages, C included, did not specify the size of
basic types. This is the case for almost any ANSI standard I can think of
pre-1995 (I was out of compilers by then, so I will not say for sure).
Of course, part of the reason you did not specify things was that back in
that era we had 60-bit machines, 32-bit machines, 16-bit machines and 8-bit
machines, not to mention the weird ones where the pointers were of differing
sizes from the data (off the bat, I can think of systems where the pointer
registers were defined [not just limited, but defined] to be 17 bits, 24
bits, or 35 bits). Then throw into the mix that not all machines had
native character pointers, and that there were machines where pointers to
int and pointers to char had differing representations.
–
Don Burn (MVP, Windows DDK)
Windows Filesystem and Driver Consulting
Website: http://www.windrvr.com
Blog: http://msmvps.com/blogs/WinDrvr
wrote in message news:xxxxx@ntdev…
> The same argument could (and probably would) have been made going from
> Win16 to Win32 and so using that logic we would today still have a 16-bit
> int.
However, changing int from 16 to 32 bits added much more value than
changing int from 32 to 64 bits would.
>> 4K page…Well, that wasn’t really Microsoft’s decision to make…
>
> Perhaps, though someone could have mentioned to AMD that a page size
> appropriate in the 1960’s IBM 360 architecture might be a bit out of date.
>
Large pages are 4 MB (2 MB with PAE) and have been available even on 32-bit
IA-32, starting with the Pentium processor. I believe Windows support for
large pages was introduced with Server 2003, but I may be mistaken; it could
be even older. The default page size is 4 KB for compatibility, and since
there is already an option for large pages, why introduce another page size?
–
Aram Hăvărneanu
I strongly disagree.
LLP64 makes conversion of code from 32-bit to 64-bit MUCH easier, both conceptually and practically.
In terms of “how long is int”, or “what’s the ANSI spec”… when you’re writing kernel-mode code for Windows that shouldn’t really enter into the equation. When’s the last time you saw a kernel-mode data structure with an “int” field??
When you write code for Windows, you should be using the set of OS-defined, abstract, platform-agnostic data types. This isolates you (somewhat, not totally of course) from the underlying processor architecture and lets you instead concentrate on the data items you need to store.
A ULONG is always 32 bits and unsigned. End of story. That’s what ULONG *means* in Windows; it doesn’t matter what platform you’re on. Likewise all the other abstract types. Pointers are of necessity sized according to the underlying processor architecture. And if you want to deal with a pointer the way you deal with a ULONG - that is, have a pointer-precision variable that you use for unsigned arithmetic operations - there’s ULONG_PTR. OK, the name isn’t great, but it’s there.
LP64 constantly forces you to ask “HOW long is this data item” and “WHAT machine am I running on” when you look at a data item. With LLP64, you a priori KNOW the length of your data items, unless they’re pointers, which you should expect will be “long enough” to allow you to, you know, point to things.
LLP64 makes it pretty darn simple to move drivers from 32-bit to 64-bit (or back). That’s not something that you can say about LP64, at least not in my experience.
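Peter's pointer-precision point can be sketched in a few lines. Since ULONG_PTR is defined only in the Windows headers, C99's uintptr_t (its portable equivalent) stands in here, and `align_up` is a hypothetical helper:

```c
#include <assert.h>
#include <stdint.h>

/* Arithmetic on a pointer's bits must be done at pointer precision.
 * Rounding a pointer up to an alignment boundary is the classic case:
 * doing this math in a 32-bit ULONG would chop the upper half of a
 * 64-bit address, while a pointer-sized unsigned integer (ULONG_PTR on
 * Windows, uintptr_t in portable C99) is correct on both builds. */
static void *align_up(void *p, uintptr_t alignment)  /* power of two */
{
    uintptr_t v = (uintptr_t)p;
    v = (v + alignment - 1) & ~(alignment - 1);
    return (void *)v;
}
```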
Peter
OSR
Perfect. Tim says that Windows is ILP64 and Peter says LLP64.
Which is it, please?
Thank you,
-PWM
Jan Bottorff wrote:
> > In my opinion, this was a huge mistake on Microsoft’s part, but they
> > didn’t ask me.
> My guess is the choice was swayed to preserve people’s on-disk and network
> data formats. A lot of people write binary C structures to disk, or send
> them over network channels. It would be kind of ugly if the same source
> code, compiled for 32-bit and for 64-bit, read and wrote incompatible file
> formats.
Oh, pooh. The solution for that problem is to declare the 32-bit parts
of the structure to be “int”. Or, better yet, use a few #ifdefs to declare
your own types of the sizes you need. At the risk of launching another
unwanted holy war, Unix programmers have been doing this for decades.
There is lots of Unix code that compiles just fine on 16-bit, 32-bit,
and 64-bit platforms. I don’t understand why Microsoft couldn’t trust
us to do the same.
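A sketch of that decades-old do-it-yourself approach (the typedef names are made up); C99's int32_t later made hand-rolled versions like this obsolete:

```c
#include <assert.h>
#include <limits.h>

/* Pick a 32-bit type at compile time from limits.h, the way portable
 * Unix code did before C99.  my_int32/my_uint32 are hypothetical names;
 * each project invented its own, which was part of the problem. */
#if INT_MAX == 2147483647
typedef int           my_int32;
typedef unsigned int  my_uint32;
#elif LONG_MAX == 2147483647
typedef long          my_int32;
typedef unsigned long my_uint32;
#else
#error "no 32-bit integer type available"
#endif
```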
> This is a side effect of many C programmers’ frame of mind: they “know”
> the data layout generated by the compiler. Clearly, if long varied between 4
> and 8 bytes, a programmer, just by looking at the source code, would have no
> idea what the binary format of a structure will be; it would depend on the
> compilation context. If long stays the same size, those C programmers can
> keep on living this fantasy.
The thing that bothers me most about this is the grandmotherly
philosophy. After a rather small amount of pain, we programmers managed
to survive the 16-to-32 transition without permanent scars. We widened
our pointers, we changed our concept of “int”, we learned better coding
habits. In the interim, apparently we have grown lazy, and Microsoft
believes we are now all too stoopid to handle a 32-to-64 transition
without having a Boy Scout from Redmond hold our hands at every street
corner. The “32-bit long” decision is merely one manifestation of
this. The whole “64-bit goes in System32 and 32-bit goes in SysWow64”
idiocy is another prime example. How could that possibly have made it
past a design review without being laughed out of the building? File
system redirection and registry redirection are yet more examples. ALL
of those things are completely unnecessary complications – additional
opportunities for things to go wrong, causing confusion for people who
work at lower levels, for essentially zero net gain.
I guess you can tell I feel strongly about this.
> I’d argue the REAL mistake was made long ago, when the C language designers
> didn’t specify the sizes of int and long; they just left it an implementer’s
> choice, and implementers made different choices. Lots of language definitions
> DO specify the size of basic types.
Yes. C99 solves this with the very handy int8_t, int16_t, int32_t,
uint8_t etc. types.
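A small example of those C99 types in use (the function is a made-up illustration):

```c
#include <assert.h>
#include <stdint.h>

/* int32_t is 32 bits and int64_t is 64 bits under every data model -
 * LP64, LLP64, or anything else - so the "how wide is long" question
 * simply disappears.  square() widens before multiplying so the product
 * cannot overflow a 32-bit intermediate. */
static int64_t square(int32_t x)
{
    return (int64_t)x * x;    /* widen first: the product needs 64 bits */
}
```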
–
Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.
Don Burn wrote:
> Actually, most older languages, C included, did not specify the size of
> basic types. This is the case for almost any ANSI standard I can think of
> pre-1995 (I was out of compilers by then, so I will not say for sure).
That’s a good point. Many Fortran compilers included INTEGER*2 and
INTEGER*4, but those were extensions to the standard. Pascal’s types
were all implementation-defined widths, although you could use the
clever subrange type to create your own.
Having spent 10 years at Control Data, producer of machines with 12-bit
and 60-bit data words and 18-bit addresses, I was particularly sensitive
to chauvinistic programming assumptions about things like “bytes” and
“characters”…
–
Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.
xxxxx@osr.com wrote:
> I strongly disagree.
> LLP64 makes conversion of code from 32-bit to 64-bit MUCH easier, both conceptually and practically.
> In terms of “how long is int”, or “what’s the ANSI spec”… when you’re writing kernel-mode code for Windows that shouldn’t really enter into the equation. When’s the last time you saw a kernel-mode data structure with an “int” field??
> …
> A ULONG is always 32 bits and unsigned. End of story.
And why is that? It is a relic of the circa 1993 conversion from 16-bit
to 32-bit, because it was the only type guaranteed to be 32-bit in both
environments. Now, like the apocryphal tale that our current railroad
spacing derives from the size of a horse’s behind, are we to be tied to
this accidental relic forever? Should this 18-year-old typedef choice
really be influencing Microsoft’s decisions in fundamental compiler
choices? I think not.
What Microsoft SHOULD have done is spent 4 hours creating an include
file defining INT32 and UINT32 and INT64 and UINT64, etc. When I see
this in an include file, I know some programmer was smoking something
stronger than tobacco:
typedef int INT;
typedef long LONG;
That’s idiotic. ULONG is as well.
> LP64 constantly forces you to ask “HOW long is this data item” and “WHAT machine am I running on” when you look at a data item?
Or, alternatively, and better, you define your own compiler-independent
types, or use some pseudo-standard include file that eliminates the issue.
–
Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.
I lived through the early days of C (I started using it in 1975), and the
concept of defining your own compiler-independent types sounds great but in
practice has SUCKED! First, if you don’t make a corporate standard, then
reading someone else’s code is a nightmare, since types are not consistent.
Second, even if you do, code comes in from other places, so you get mixed
type conventions - or else a support nightmare of conversions every time a
code drop appears.
I personally agree with Peter. Microsoft changed from 16 to 32 bits because
most of the world used the conventions they accepted for 32-bit. But when
the 64-bit stuff was coming in, there was no standard; in fact, most of the
C compiler writers were sticking with the conventions we already had (or
else offering a command-line switch to support multiple conventions).
Having lived in the world of mixed machines (plus the 36-bit systems I
forgot in my earlier post) and C compilers where everyone was different, I
applaud Microsoft for being conservative in their transitions.
–
Don Burn (MVP, Windows DDK)
Windows Filesystem and Driver Consulting
Website: http://www.windrvr.com
Blog: http://msmvps.com/blogs/WinDrvr
Don Burn wrote:
> Having lived in the world of mixed machines (plus the 36-bit systems I
> forgot in my earlier post) and C compilers where everyone was different, I
> applaud Microsoft for being conservative in their transitions.
But at what cost? As it stands today, there is no standard-compliant
type in x64 Visual C++ that maps to the processor’s native register
size. That is just not a healthy situation. After all, that’s what C’s
types were supposed to be for. And what happens in the NEXT
transition? Do they introduce a “long long long” type for 128-bit ints?
Again, as an existence proof, I would point to the wildly heterogeneous
world of Unix, where 32-bit ints and 64-bit longs have lived a happy and
peaceful coexistence for a long time.
–
Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.