Why does a byte only range from 0 to 255?
Strictly speaking, the term "byte" can refer to a unit with a size other than 8 bits, and therefore other than 256 values; it's just that 8 bits is the almost universal size. From Wikipedia:
Historically, a byte was the number of bits used to encode a single character of text in a computer and it is for this reason the basic addressable element in many computer architectures.
The size of the byte has historically been hardware dependent and no definitive standards exist that mandate the size. The de facto standard of eight bits is a convenient power of two permitting the values 0 through 255 for one byte. Many types of applications use variables representable in eight or fewer bits, and processor designers optimize for this common usage. The popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit size. The term octet was defined to explicitly denote a sequence of 8 bits because of the ambiguity associated with the term byte.
Ironically, these days the size of "a single character" is no longer considered to be a single byte in most cases... most commonly, the idea of a "character" is associated with Unicode, where characters can be represented in a number of different formats, but are typically either 16 or 32 bits.
It would be amusing for a system which used UCS-4/UTF-32 (the direct 32-bit representation of Unicode) to designate 32 bits as a byte. The confusion caused would be spectacular.
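To make the 16-vs-32-bit point concrete, here's a small sketch in Java, where char is a 16-bit UTF-16 code unit rather than a byte-sized character (the class name is just for illustration):

```java
public class CharSizes {
    public static void main(String[] args) {
        // A char in Java is a 16-bit UTF-16 code unit, not an 8-bit "byte character".
        String ascii = "A";            // U+0041 fits in a single 16-bit code unit
        String emoji = "\uD83D\uDE00"; // U+1F600 needs a surrogate pair (two code units, 32 bits)

        System.out.println(ascii.length());                          // 1 code unit
        System.out.println(emoji.length());                          // 2 code units
        System.out.println(emoji.codePointCount(0, emoji.length())); // still 1 "character" (code point)
    }
}
```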
However, assuming we take "byte" as synonymous with "octet", there are eight independent bits, each of which can be either on or off, true or false, 1 or 0, however you wish to think of it. That gives 2^8 = 256 possible combinations, which are typically numbered 0 to 255. (That's not always the case though. For example, the designers of Java unfortunately decided to treat bytes as signed integers in the range -128 to 127.)
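To illustrate that last point, here's a minimal Java sketch (class name is just for illustration) showing the 256 combinations and how the same 8-bit pattern that reads as 200 unsigned comes out as -56 in Java's signed byte:

```java
public class ByteRange {
    public static void main(String[] args) {
        // 8 independent bits give 2^8 = 256 distinct values.
        System.out.println(1 << 8);   // 256

        // Java's byte is signed, so those 256 bit patterns run from -128 to 127.
        byte b = (byte) 200;          // the bit pattern 11001000
        System.out.println(b);        // -56, because Java interprets it as signed

        // Masking with 0xFF recovers the unsigned 0-255 view of the same bits.
        System.out.println(b & 0xFF); // 200
    }
}
```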