Difference between IntPtr and UIntPtr

xxbbcc · Nov 1, 2012 · Viewed 6.9k times

I was looking at the P/Invoke declaration of RegOpenKeyEx when I noticed this comment on the pinvoke.net page:

Changed IntPtr to UIntPtr: When invoking with IntPtr for the handles, you will run into an Overflow. UIntPtr is the right choice if you wish this to work correctly on 32 and 64 bit platforms.

This doesn't make much sense to me: both IntPtr and UIntPtr are supposed to represent pointers, so their size should match the bitness of the process - either 32 bits or 64 bits. Since these are not numbers but pointers, their signed numeric values shouldn't matter, only the bits that represent the address they point to. I cannot think of any reason why there would be a difference between these two, but this comment made me uncertain.
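For reference, the declaration I'm talking about looks roughly like the sketch below - my own reconstruction from the Win32 signature rather than a verbatim copy of the page, with UIntPtr used for the HKEY parameters as the comment suggests:

using System;
using System.Runtime.InteropServices;

static class NativeMethods
{
    // RegOpenKeyEx from advapi32.dll, with UIntPtr for the registry key handles.
    [DllImport("advapi32.dll", CharSet = CharSet.Auto)]
    public static extern int RegOpenKeyEx(
        UIntPtr hKey,           // an already open key or a predefined root key handle
        string lpSubKey,
        uint ulOptions,
        int samDesired,
        out UIntPtr phkResult); // receives the opened key handle
}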

Is there a specific reason to use UIntPtr instead of IntPtr? According to the documentation:

The IntPtr type is CLS-compliant, while the UIntPtr type is not. Only the IntPtr type is used in the common language runtime. The UIntPtr type is provided mostly to maintain architectural symmetry with the IntPtr type.

This, of course, implies that there's no difference (as long as someone doesn't try to convert the values to integers). So is the above comment from pinvoke.net incorrect?

Edit:

After reading MarkH's answer, I did a bit of checking and found out that .NET applications are not large-address aware and can only handle a 2 GB virtual address space when compiled in 32-bit mode. (One can use a hack to turn on the large-address-aware flag, but MarkH's answer shows that checks inside the .NET Framework will break because the address space is assumed to be only 2 GB, not 3 GB.)

This means that every valid virtual memory address a pointer can hold (as far as the .NET Framework is concerned) lies between 0x00000000 and 0x7FFFFFFF. When this range is interpreted as a signed int, no value is negative because the highest bit is never set. This reinforces my belief that there's no difference between using IntPtr and UIntPtr. Is my reasoning correct?
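To illustrate what I mean (the address value is made up and just stands for the top of a 2 GB address space):

using System;

class PointerBits
{
    static void Main()
    {
        // Any address in the 0x00000000-0x7FFFFFFF range has the same bit
        // pattern whether it is stored in an IntPtr or a UIntPtr, and
        // neither constructor overflows, even in a 32-bit process.
        long address = 0x7FFFFFFF;
        IntPtr signedPtr = new IntPtr(address);            // high bit clear, stays positive
        UIntPtr unsignedPtr = new UIntPtr((ulong)address);

        Console.WriteLine(signedPtr);                      // 2147483647
        Console.WriteLine(unsignedPtr);                    // 2147483647
    }
}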

Fermat2357 pointed out that the above edit is wrong.

Answer

David J · Nov 1, 2012

UIntPtr and IntPtr are internally implemented as

private unsafe void* m_value;

You are right: both simply manage the bits that represent an address.

The only place I can think of where an overflow issue could arise is pointer arithmetic. Both types support adding and subtracting offsets, but even in that case the binary representation should be fine after such an operation.
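A quick sketch of what I mean by offset arithmetic (the addresses are made up; Add and Subtract exist from .NET 4 on):

using System;

class OffsetArithmetic
{
    static void Main()
    {
        // Both types offer Add and Subtract helpers that operate directly
        // on the underlying value.
        IntPtr p = IntPtr.Add(new IntPtr(0x1000), 0x10);          // 0x1010
        UIntPtr u = UIntPtr.Subtract(new UIntPtr(0x1000), 0x10);  // 0x0FF0

        Console.WriteLine(p.ToInt64().ToString("X"));   // prints 1010
        Console.WriteLine(u.ToUInt64().ToString("X"));  // prints FF0
    }
}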

From my experience I would also prefer UIntPtr, because I think of a pointer as an unsigned quantity. But that is not relevant here and is only my opinion.

It does not seem to make any difference whether you use IntPtr or UIntPtr in your case.

EDIT:

IntPtr is CLS-compliant because there are languages on top of the CLR that do not support unsigned types.
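For example (the class and method names are made up for illustration), with an assembly marked CLS-compliant the C# compiler warns as soon as UIntPtr shows up in a public signature:

using System;

[assembly: CLSCompliant(true)]

public static class NativeHandles
{
    // IntPtr is CLS-compliant, so any language targeting the CLR can call this.
    public static IntPtr GetSignedHandle() { return IntPtr.Zero; }

    // UIntPtr is not CLS-compliant: this public member triggers warning CS3002
    // ("Return type is not CLS-compliant"), because a language without
    // unsigned types could not consume it.
    public static UIntPtr GetUnsignedHandle() { return UIntPtr.Zero; }
}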