How do computers translate everything to binary? When they see a binary code, how do they know if it represents a number or a word or an instruction?

user50746 · Oct 8, 2014 · Viewed 16.8k times

I know how computers translate numbers to binary. But I've heard that computers translate everything (words, instructions, ...) to binary, not just numbers, and I don't understand how that is possible.

Could you show me some examples? Like how does a computer translate the letter "A" to binary?

And when computers see a binary code, how can they know if that long string of 0s and 1s represents a number or a word or an instruction?

Example:

Let's say that a computer programmer encoded the letter "Z" so that it translates to this binary string: 11011001111011010111

So when the computer encounters this binary string, it will translate it to the letter "Z".

But what happens when we ask this computer "what is the product of 709 and 1259?"

The computer would answer "892631". But that number, when translated to binary, is 11011001111011010111.

So how would it tell the difference between "Z" and "892631"?

Please note that I don't know much about computer science, so please explain everything in simple terms.

Answer

Guffa · Oct 9, 2014

Computers don't actually translate anything to binary; it's all binary from the start, and the computer never knows anything other than binary.
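
In your example, nothing in the bit pattern itself says whether it is "Z" or 892631; the meaning comes entirely from what the program reading it does with it. Here is a minimal Python sketch of the idea (your 20-bit code for "Z" is hypothetical, not any real encoding):

# The hypothetical 20-bit code for "Z" from the question (not a real character encoding)
bits = "11011001111011010111"

# Read the same bits as a plain unsigned number: 892631, i.e. 709 * 1259
print(int(bits, 2))   # 892631
print(709 * 1259)     # 892631

# The bits carry no label saying "character" or "number". A program that
# expects a character here would look the value up in its character table;
# a program that expects a number would do arithmetic with it.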

The character A stored in memory would be 01000001 (the character code 65), and the computer doesn't see that as anything but a binary number. When we ask the computer to display that number as a character on the screen, it looks up the graphical representation for it in a font definition to find some other binary numbers to send to the screen hardware.
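
To make that concrete, here is a small Python sketch (just an illustration of the idea, not of how memory is actually read): the character A and the number 65 are the very same bit pattern.

# The character code for "A" is 65, which is 01000001 in binary
print(ord("A"))                 # 65
print(format(ord("A"), "08b"))  # 01000001

# Reading the same 8 bits back as a character code gives "A" again
print(chr(0b01000001))          # A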

For example, if the computer were an eight-bit Atari, it would find eight binary values to represent the character A on the screen:

00000000
00011000
00111100
01100110
01100110
01111110
01100110
00000000

As you can see, the binary values translate to dark and bright pixels when the graphics hardware draws the character on the screen.
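
You can mimic that translation with a short Python sketch that prints each 1 bit as a bright pixel ("#") and each 0 bit as a dark one (a space):

# The eight rows of the glyph for "A", as binary values
glyph = [
    0b00000000,
    0b00011000,
    0b00111100,
    0b01100110,
    0b01100110,
    0b01111110,
    0b01100110,
    0b00000000,
]

# Turn each 1 bit into "#" and each 0 bit into a space, row by row
for row in glyph:
    print(format(row, "08b").replace("0", " ").replace("1", "#"))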

Similarly, whatever we do with numbers in the computer, it is all a matter of moving binary values around, doing calculations on binary values, and translating them into other binary values.

If you for example take the character code for A (65) and want to display it as a decimal number, the computer would calculate that the decimal representation consists of the digits 6 (110) and 5 (101), translate those to the character 6 (00110110) and the character 5 (00110101), and then translate those characters into their graphical representations.
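
That chain of translations can be sketched in Python (the real machine does all of this with binary values throughout, but the steps are the same):

code = ord("A")                   # 65, the character code for A

# Split 65 into its decimal digits: 6 (binary 110) and 5 (binary 101)
digits = [code // 10, code % 10]  # [6, 5]

# Translate each digit to its character, and show that character's code:
# "6" is 00110110 and "5" is 00110101
for d in digits:
    ch = str(d)
    print(ch, format(ord(ch), "08b"))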