When studying programming for the 8085, 8086, and microprocessors in general, we always encounter hexadecimal representation. I understand that binary numbers are important in computers, but why are hexadecimal numbers important? Is there any historical significance?
It would also be nice if someone could point to some historical papers.
EDIT:
How do computers handle hexadecimal numbers? For example, what happens in the 8085 when a hexadecimal number is given as input?
Hexadecimal has a closer visual mapping to the various bytes used to store a number than decimal does.
For example, you can tell from the hexadecimal number 0x12345678 that the most significant byte will hold 0x12 and the least significant byte will hold 0x78. The decimal equivalent of that, 305419896, tells you nothing.
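A minimal C sketch of that point (not 8085 assembly; the value is just the example number from above): shifting and masking pull out exactly the byte values you can already read off the hex digits.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* The same 32-bit value, written in hex; in decimal it is 305419896. */
    uint32_t value = 0x12345678;

    /* Each pair of hex digits corresponds directly to one byte. */
    uint8_t most_significant  = (value >> 24) & 0xFF;  /* 0x12 */
    uint8_t least_significant = value & 0xFF;          /* 0x78 */

    printf("value                  = 0x%08X (%u)\n", (unsigned)value, (unsigned)value);
    printf("most significant byte  = 0x%02X\n", most_significant);
    printf("least significant byte = 0x%02X\n", least_significant);
    return 0;
}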
From a historical perspective, it's worth mentioning that octal was more commonly used when working with certain older computers that employed a different number of bits per word than modern 16/32-bit computers. From the Wikipedia article on octal:
Octal became widely used in computing when systems such as the PDP-8, ICL 1900 and IBM mainframes employed 12-bit, 24-bit or 36-bit words. Octal was an ideal abbreviation of binary for these machines because their word size is divisible by three.
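To illustrate why divisibility by three mattered, here is a small C sketch (the 12-bit value is made up): every octal digit covers exactly three bits, so a 12-bit word reads off cleanly as four octal digits.

#include <stdio.h>

int main(void) {
    /* A hypothetical 12-bit word, as on a PDP-8 (the value is arbitrary). */
    unsigned word = 05723;   /* octal literal: 4 digits x 3 bits = 12 bits */

    /* Each octal digit maps to exactly three bits of the word. */
    for (int digit = 3; digit >= 0; digit--) {
        unsigned three_bits = (word >> (digit * 3)) & 07;
        printf("octal digit %o -> bits %u%u%u\n",
               three_bits,
               (three_bits >> 2) & 1, (three_bits >> 1) & 1, three_bits & 1);
    }
    return 0;
}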
As for how computers handle hexadecimal numbers, by the time the computer is dealing with it, the original base used to input the number is completely irrelevant. The computer is just dealing with bits and bytes.
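A short C sketch of that idea, using the standard strtoul function: the same number entered as hex text and as decimal text parses to the identical bit pattern, because the base is only a property of the textual input, not of the stored value.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* The same number entered as hex text and as decimal text. */
    unsigned long from_hex     = strtoul("0x12345678", NULL, 0);
    unsigned long from_decimal = strtoul("305419896", NULL, 0);

    /* Once parsed, both hold the same bits; only the input notation differed. */
    printf("equal: %s\n", from_hex == from_decimal ? "yes" : "no");
    printf("stored value, shown in hex: 0x%lX\n", from_hex);
    return 0;
}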