I am trying to understand why a DWORD value is often described in Hexadecimal on MSDN.
The reason I am analyzing this is that I am trying to understand, fundamentally, why all these different number data types exist. A local mentor suggested to me that the creation of DWORD and the other Microsoft types had something to do with the evolution of processors. That gives some meaning and context to these data types, but I would like more background.
Either way, I could use some explanation or some resources on how to remember the difference between DWORD, unsigned integers, bytes, bits, WORD, etc.
In summary, my questions are: 1) Why are DWORDs represented in Hex? 2) Can you provide resources on the differences between numerical data types and why they were created?
Everything within a computer is a bunch of 0s and 1s. But writing an entire DWORD in binary is quite tedious:
00000000 11111111 00000000 11111111
To save space and improve readability, we like to write it in a shorter form. Decimal is what we're most familiar with, but it doesn't map well onto binary. Octal and hexadecimal map quite conveniently, lining up exactly with the binary bits:
// each octal digit is exactly 3 binary digits
01 010 100 binary = 124 octal
// each hexadecimal digit is exactly 4 binary digits
0101 0100 binary = 54 hexadecimal
Since hex lines up very nicely with 8-bit Bytes (2 hex digits make a Byte), the notation stuck, and that's what gets used most. It's easier to read, easier to understand, easier to line up when messing around with bitmasks.
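For instance, here's a small C sketch (purely illustrative, nothing Windows-specific about it) using the 32-bit pattern from above: two hex digits correspond to each byte, and a hex mask pulls one byte out cleanly:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    /* the 32-bit pattern 00000000 11111111 00000000 11111111 from above */
    uint32_t value    = 0x00FF00FFu;    /* two hex digits per byte   */
    uint32_t low_byte = value & 0xFFu;  /* mask off the lowest byte  */

    printf("value    = 0x%08" PRIX32 "\n", value);    /* 0x00FF00FF */
    printf("low byte = 0x%02" PRIX32 "\n", low_byte); /* 0xFF       */
    return 0;
}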
The normal shorthand for identifying which base is being used:
1234543 = decimal
01234543 = octal (leading zero)
0x1234543 = hexadecimal (starts with 0x)
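To see all three prefixes in action, here's a quick C sketch (just an illustration) writing the same number, 84, in each base and printing it back out in each base:

#include <stdio.h>

int main(void)
{
    int d = 84;    /* decimal                  */
    int o = 0124;  /* octal: leading zero      */
    int h = 0x54;  /* hexadecimal: leading 0x  */

    printf("%d %d %d\n", d, o, h);   /* prints: 84 84 84  -- same value */
    printf("%d %o %x\n", d, d, d);   /* prints: 84 124 54 -- same value, three bases */
    return 0;
}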
As for your question about BYTE, WORD, DWORD, etc...
Computers started with a bit. Only 1 or 0. (Bit even had a cameo in the original Tron.)
Bytes are 8 bits long (well, once upon a time there were 7-bit bytes, but we can ignore those). This lets you hold a number from 0 to 255, or a signed number from -128 to 127. Better than just 1/0, but still limited. You may have heard references to "8-bit gaming"; that's what the 8 bits refer to. The system was built around Bytes.
Then computers grew to have 16-bit registers. This is 2 Bytes, and became known as a WORD (no, I don't know why). Now, numbers could be 0-65535 or -32768 to 32767.
We continued to want more power, and computers were expanded to 32-bit registers. 4 Bytes, 2 Words, also known as a DWORD (double-word). To this day, you can look in "C:\Windows" and see a directory for "system" (old 16-bit pieces) and "system32" (new 32-bit components).
Then came the QWORD (quad-word): 4 WORDs, 8 Bytes, 64 bits. Ever hear of the Nintendo 64? That's where its name came from. Modern architectures are 64-bit: the CPU's internal registers are 64 bits wide, and you can generally run either a 32-bit or a 64-bit operating system on such CPUs.
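If you want to see those sizes side by side, here's a minimal C sketch. The typedefs are local stand-ins built on <stdint.h> so it compiles anywhere; on Windows, BYTE, WORD, and DWORD normally come from <windows.h>, and QWORD is defined here just for the illustration:

#include <stdio.h>
#include <stdint.h>

/* local stand-ins for the Windows typedefs */
typedef uint8_t  BYTE;   /*  8 bits */
typedef uint16_t WORD;   /* 16 bits */
typedef uint32_t DWORD;  /* 32 bits */
typedef uint64_t QWORD;  /* 64 bits */

int main(void)
{
    printf("BYTE  is %zu byte(s)\n", sizeof(BYTE));   /* 1 */
    printf("WORD  is %zu byte(s)\n", sizeof(WORD));   /* 2 */
    printf("DWORD is %zu byte(s)\n", sizeof(DWORD));  /* 4 */
    printf("QWORD is %zu byte(s)\n", sizeof(QWORD));  /* 8 */
    return 0;
}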
That covers Bit, Byte, WORD, and DWORD. Those are raw types, often used for flags, bitmasks, etc. If you want to hold an actual number, it's best to use a signed/unsigned integer, long, etc.
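As a tiny example of the flags idea (the flag names here are made up, not from any real API), a 32-bit value gives you 32 independent on/off switches, and writing the constants in hex makes it obvious which bit each one occupies:

#include <stdio.h>
#include <stdint.h>

/* hypothetical flags -- each one is a single bit in a 32-bit value */
#define FLAG_READ    0x00000001u
#define FLAG_WRITE   0x00000002u
#define FLAG_HIDDEN  0x00000004u

int main(void)
{
    uint32_t flags = FLAG_READ | FLAG_HIDDEN;   /* turn two flags on */

    if (flags & FLAG_READ)
        printf("read flag is set\n");
    if (!(flags & FLAG_WRITE))
        printf("write flag is not set\n");
    return 0;
}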
I didn't cover floating point numbers, but hopefully this helps with the general idea.