Start Bit vs Start Byte

Steve · Nov 9, 2008 · Viewed 10.2k times

I know that in a lot of asynchronous communication, the packet begins with a start bit.

But a start bit is just a 1 or a 0. How do you differentiate a start bit from the end bit of the last packet?

Ex. If I choose my start bit to be 0 and my end bit to be 1, and I receive 0 (data stream A) 1 0 (data stream B) 1, what's to stop me from assuming there is a data stream C that contains the same contents as "(data stream A) 1 0 (data stream B)"?

Isn't it more convenient to have a start BYTE and then check the data stream for that combination of bits? That would reduce the possibility of confusion between the start and end bits.

Answer

Adam Liss · Nov 9, 2008

Great question! Most asynchronous communication also specifies a stop bit, which is the complement of the start bit, ensuring each new symbol begins with a stop-to-start transition.

Example: let's transmit the characters ABC, which are ASCII 65, 66, and 67:

A = 65 = 0x41 = 0100 0001
B = 66 = 0x42 = 0100 0010
C = 67 = 0x43 = 0100 0011
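
As a quick sanity check, here's a small Python snippet (my own illustration, not part of the protocol) that reproduces those values:

# Print each character's decimal, hex, and 8-bit binary representation.
for ch in "ABC":
    print(ch, "=", ord(ch), "=", hex(ord(ch)), "=", format(ord(ch), "08b"))
# A = 65 = 0x41 = 01000001
# B = 66 = 0x42 = 01000010
# C = 67 = 0x43 = 01000011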

Let's also assume (arbitrarily) that the start bit is 0 and the stop bit is 1, and the data is transmitted from MSB to LSB. The transmitter will be in the stop (1) state when no data is transmitted. So the receiver might see this:

Data:   ....1111 0010000011 111 0010000101 0010000111 11111....
         (quiet) ^   A    $     ^    B   $ ^    C   $ (quiet)

With apologies for the ASCII graphics, the data consists of a series of stop (1) bits while the channel is idle. When the transmitter is ready to send a character, it sends a start (0) bit (marked with ^), followed by the character code, and ending with a stop (1) bit (marked with $). It continues to send stop bits until the next character is transmitted, beginning with another start bit.
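
Here's a minimal Python sketch of that scheme (my own illustration; the frame/deframe names and the list-of-bits representation are just for clarity). The transmitter wraps each byte in a 0 start bit and a 1 stop bit, MSB first; the receiver resynchronizes simply by skipping idle (1) bits until it sees the 1-to-0 transition of a start bit:

# Transmitter: start bit (0) + 8 data bits (MSB first) + stop bit (1).
def frame(data):
    bits = []
    for byte in data:
        bits.append(0)                                           # start bit
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))   # MSB first
        bits.append(1)                                           # stop bit
    return bits

# Receiver: skip idle/stop (1) bits, then read each 10-bit frame.
def deframe(bits):
    out, i = [], 0
    while i < len(bits):
        if bits[i] == 1:              # idle line or extra stop bits
            i += 1
            continue
        byte = 0                      # bits[i] is a start bit (0)
        for b in bits[i + 1:i + 9]:   # next 8 bits are data, MSB first
            byte = (byte << 1) | b
        out.append(byte)
        i += 10                       # start + 8 data + stop
    return bytes(out)

print("".join(map(str, frame(b"A"))))              # 0010000011, as in the diagram
stream = [1, 1, 1, 1] + frame(b"ABC") + [1, 1, 1]  # quiet, data, quiet
print(deframe(stream))                             # b'ABC'

The key point for the original question: the receiver never scans the payload for a special pattern. Because every frame is exactly 10 bits long and begins with a guaranteed stop-to-start (1-to-0) transition, the receiver always knows where the next frame begins.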

The reason we use start bits instead of bytes is efficiency. The scheme above requires 10 bits (1start + 8data + 1stop) to transmit 8 bits of data, resulting in an overhead of (10 - 8) / 8 = 1/4 = 25%. If we used start and stop bytes, we'd need to transmit 3 bytes for each byte of data, which would be an overhead of (3 - 1)/1 = 2 = 200%. If the start, data, and stop bytes were each 8 bits, we'd have to transmit 24 bits instead of 10 for each character, so it would take almost 2 1/2 times as long to send the data!
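
For a quick check of that arithmetic in Python:

# Start/stop bits: 10 bits on the wire per 8 data bits.
print((10 - 8) / 8)   # 0.25 -> 25% overhead
# Start/stop bytes: 24 bits on the wire per 8 data bits.
print((24 - 8) / 8)   # 2.0  -> 200% overhead
print(24 / 10)        # 2.4  -> almost 2 1/2 times as long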