
I have a hard time understanding the use of MSB (most significant bit) vs. LSB (least significant bit) and why they're used at all. It is my understanding that it has to do with endianness, but I cannot get my head around it.

Basically: if I read a specification for X and it says the data needs to be read LSB-first or MSB-first, ok, I can do that. But why am I doing it?

Why don't we just send all data MSB?

4 Answers


Endianness for binary (base 2) is no different than endianness for any other base.

Without a specification telling you which side the most significant and least significant digits are on, you wouldn't know whether

1234

is one-thousand-two-hundred-and-thirty-four or four-thousand-three-hundred-and-twenty-one.
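
To make the analogy concrete in binary, here is a minimal C sketch (the byte values are arbitrary, chosen for illustration) showing how the same four bytes yield two different 32-bit numbers depending on which end you treat as most significant:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* The same four bytes, in the order they arrive. */
        uint8_t bytes[4] = {0x12, 0x34, 0x56, 0x78};

        /* Little-endian reading: the first byte is least significant. */
        uint32_t le = (uint32_t)bytes[0]
                    | (uint32_t)bytes[1] << 8
                    | (uint32_t)bytes[2] << 16
                    | (uint32_t)bytes[3] << 24;

        /* Big-endian reading: the first byte is most significant. */
        uint32_t be = (uint32_t)bytes[0] << 24
                    | (uint32_t)bytes[1] << 16
                    | (uint32_t)bytes[2] << 8
                    | (uint32_t)bytes[3];

        printf("read little-endian: 0x%08X\n", le); /* 0x78563412 */
        printf("read big-endian:    0x%08X\n", be); /* 0x12345678 */
        return 0;
    }

Just like 1234 on paper, the bytes themselves don't say which reading is right; the specification has to.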

Note, as an additional curiosity, that in English "twenty-one" is spoken most-significant-digit first, whereas "nineteen" is spoken least-significant-digit first, so even within the same language community it's not universally obvious which order is the right one.

For dates, there are three different conventions (using Alan Turing's birthday as example):

  • least significant component first (e.g. in Germany): 23.06.12
  • most significant component first (e.g. ISO): 12-06-23
  • complete and utter confusion (US): 06/23/12

Again, you have to pick some order, and you have to communicate which order you picked, and there is no obviously "right" order. (I would argue though, that the US date format is obviously wrong :-D .)

Jörg W Mittag

When a byte is serialized into a stream of bits and transmitted serially, it becomes important to know whether it's transmitted LSbit-first or MSbit-first. The transmitter can send the bits in either order, and the contract for whether it's LSbit-first or MSbit-first is established in the spec (or datasheet). For example, the receiver receives this:

time: 01234567
bit:  01000000

If the transmitter was sending LSbit-first, then the value is 0x02. If the transmitter was sending MSbit-first, then the value is 0x40. The receiver has to know which one it is.
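
As a sketch of what the receiver has to do (real hardware would use a shift register rather than a loop), here is how the same eight received bits decode to 0x02 or 0x40 depending on the agreed bit order:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* The bits as received over time, matching the example above. */
        int received[8] = {0, 1, 0, 0, 0, 0, 0, 0};

        uint8_t lsb_first = 0, msb_first = 0;
        for (int t = 0; t < 8; t++) {
            /* LSbit-first: the bit at time t lands in position t.    */
            lsb_first |= (uint8_t)(received[t] << t);
            /* MSbit-first: the bit at time t lands in position 7 - t. */
            msb_first |= (uint8_t)(received[t] << (7 - t));
        }

        printf("LSbit-first: 0x%02X\n", lsb_first); /* 0x02 */
        printf("MSbit-first: 0x%02X\n", msb_first); /* 0x40 */
        return 0;
    }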

Nick Alexeev

Bit order and byte order are two separate things, but one can affect your preference for the other. Say you have a little-endian machine, and you want to transmit a 32-bit number. If you also transmit the bits LSB-first, the overall bit order for the entire int (with the LSB numbered 0) is:

0 1 2 3 4 5 6 7 8 9 10 11 12 ... 31

If you transmitted MSB-first, the overall bit order for the entire int would be:

7 6 5 4 3 2 1 0 15 14 13 12 ... 24

Looking at this signal on an oscilloscope just got more difficult, and designing shift registers in hardware just got a little trickier.

The reverse occurs for big-endian machines. So endianness doesn't dictate bit order, but it sure does make one bit order easier to work with.
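
Here is a small sketch (assuming a 32-bit int whose byte 0 goes out first, as on a little-endian machine) that prints both wire orders:

    #include <stdio.h>

    /* On a little-endian machine, byte b of a 32-bit int holds bits
       8*b .. 8*b+7 of the value (bit 0 = LSB of the whole int).
       Print the global index of each bit in the order it would go
       out on the wire, byte 0 first. */
    static void show_wire_order(int lsbit_first) {
        for (int b = 0; b < 4; b++)
            for (int i = 0; i < 8; i++)
                printf("%d ", 8 * b + (lsbit_first ? i : 7 - i));
        printf("\n");
    }

    int main(void) {
        show_wire_order(1); /* LSbit-first: 0 1 2 3 ... 31 (monotone) */
        show_wire_order(0); /* MSbit-first: 7 6 5 ... 0 15 14 ... 24  */
        return 0;
    }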

As for preferring one endianness over the other, there is more to it than picking one at random. Big-endian is easier to understand conceptually, but little-endian means you don't have to offset the address in order to treat a byte as a word.
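
A minimal sketch of that last point; the printed result depends on the machine it runs on:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void) {
        uint32_t word = 42;   /* small value: fits in a single byte */
        uint8_t first_byte;

        /* Read the byte at the word's own address, i.e. offset 0. */
        memcpy(&first_byte, &word, 1);

        /* On a little-endian machine this prints 42, because the
           least significant byte sits at the lowest address, so no
           offset is needed to treat the word as a byte. On a
           big-endian machine it prints 0: the byte at offset 0 is
           the most significant one. */
        printf("%u\n", first_byte);
        return 0;
    }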

Karl Bielefeldt

MSB and LSB can be thought of in terms of the numeric properties of bit sequences. For example, during addition, carries flow from the addition of the two LSBs toward the next higher bit. The LSB itself receives no carry because it starts the addition, whereas every other bit gets a carry in from the next less significant bit position. Overflow is when a carry (of value 1) comes out of the MSB, because there are no more bits (in the byte or word size) left to carry into. The MSB is also the sign bit for signed (two's-complement) data types: if the MSB is 1 the value is negative; if it is 0, the value is positive (or zero).
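
A short C illustration of both properties (the values are arbitrary):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* Carry/overflow: adding 1 to 0xFF produces a carry out of
           the MSB; in 8 bits there is nowhere for it to go, so the
           result wraps to 0. */
        uint8_t a = 0xFF, b = 0x01;
        uint8_t sum = (uint8_t)(a + b);
        printf("0xFF + 0x01 = 0x%02X (carry out of MSB lost)\n", sum);

        /* Sign bit: in two's complement, an MSB of 1 means negative. */
        int8_t v = (int8_t)0x80;                 /* MSB set */
        printf("0x80 as signed 8-bit: %d\n", v); /* -128 */
        return 0;
    }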

Endianness is a matter of the storage (or transmission) order of bits within a byte and, more significantly as it turns out, of bytes within a word or long word. These differences show up when data is moved from one system to another. Fortunately, bit order within a byte is standardized on hard drives and networks, so we don't have to worry about bit endianness. However, byte order within a word differs between processors, so bytes get swapped when data crosses architectures; this causes problems and has to be mitigated (e.g., software has to handle it). Little-endian stores the least significant byte first, followed by more significant bytes at higher storage or packet addresses, whereas big-endian is the opposite.
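
As an illustration of that mitigation, here is a sketch of a portable 32-bit byte swap; in practice one would typically reach for the sockets functions htonl()/ntohl(), which perform the swap only on little-endian hosts:

    #include <stdio.h>
    #include <stdint.h>

    /* Unconditional 32-bit byte swap: converts between the big- and
       little-endian representations of the same value. */
    static uint32_t swap32(uint32_t x) {
        return  (x >> 24)
             | ((x >>  8) & 0x0000FF00u)
             | ((x <<  8) & 0x00FF0000u)
             |  (x << 24);
    }

    int main(void) {
        uint32_t v = 0x12345678;
        printf("0x%08X <-> 0x%08X\n", v, swap32(v)); /* 0x78563412 */
        return 0;
    }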

Erik Eidt