Why binary and not ternary computing?

ojblass · Apr 19, 2009 · Viewed 58.6k times

Isn't a three-state object immediately capable of holding more information and handling larger values? I know that processors currently use massive nets of XOR gates and that those would need to be reworked.

Since we are at 64-bit computing (2^64 possible states per word), the equivalent 64-trit ternary generation could represent 3^64 states, i.e. numbers with roughly 11 more decimal digits, since log10(3^64) ≈ 30.5 versus log10(2^64) ≈ 19.3.
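A quick sanity check of that arithmetic, as a small Python sketch (the WIDTH constant and variable names are just for illustration):

    import math

    WIDTH = 64  # word width assumed in the question

    binary_states = 2 ** WIDTH   # states a 64-bit word can hold
    ternary_states = 3 ** WIDTH  # states a 64-trit word could hold

    # Decimal digits needed to write the largest value in each system
    binary_digits = math.log10(binary_states)    # ~19.3
    ternary_digits = math.log10(ternary_states)  # ~30.5

    print(f"64 bits : ~{binary_digits:.1f} decimal digits")
    print(f"64 trits: ~{ternary_digits:.1f} decimal digits")
    print(f"gain    : ~{ternary_digits - binary_digits:.1f} digits")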

I imagine it is just as easy to detect the potential difference between +1 and 0 as between -1 and 0.

Would the added hardware complexity, power consumption, or loss of chip density offset any gains in storage and computing power?

Answer

starblue · Apr 19, 2009
  • It is much harder to build components that use more than two states/levels/whatever. For example, the transistors used in logic are either closed and don't conduct at all, or wide open. Having them half open would require much more precision and use extra power. Nevertheless, sometimes more states are used for packing more data, but rarely (e.g. modern NAND flash memory, modulation in modems).

  • If you use more than two states you need to be compatible with binary, because the rest of the world uses it. Three is out because conversion to and from binary would require expensive multiplication or division with remainder. Instead you go directly to four or a higher power of two, where each digit maps onto a fixed group of bits (see the sketch after this list).
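To make the conversion cost concrete, here is a minimal Python sketch (the function names are mine, purely for illustration): pulling base-4 digits out of a binary word needs only shifts and masks, while pulling out base-3 digits needs a division with remainder per digit.

    def base4_digits(n):
        """Base-4 digits of n, least significant first.
        Each digit is exactly two bits, so shifts and masks suffice."""
        digits = []
        while n:
            digits.append(n & 0b11)  # low two bits = one base-4 digit
            n >>= 2                  # a cheap shift, no division
        return digits or [0]

    def base3_digits(n):
        """Base-3 digits of n, least significant first.
        3 is not a power of two, so every digit costs a divide-with-remainder."""
        digits = []
        while n:
            n, r = divmod(n, 3)      # division with remainder, far costlier than a shift
            digits.append(r)
        return digits or [0]

    print(base4_digits(2023))  # [3, 1, 2, 3, 3, 1]
    print(base3_digits(2023))  # [1, 2, 2, 2, 0, 2, 2]

This is why the answer suggests jumping straight to four or a higher power of two: each multi-level digit then lines up with a fixed group of bits, so no arithmetic conversion is needed at the binary boundary.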

These are practical reasons why it is not done, but mathematically it is perfectly possible to build a computer on ternary logic.