2

From what I've read (other questions on this site, etc.), the vast majority of desktop systems have little-endian architectures, and Windows doesn't even support big-endian. Now I'm wondering whether it's even worth the extra effort to deal with endianness for the small number of big-endian desktop systems (if there even are any) out there. The application in question is 64-bit only and portable (Windows and Linux), in case that's relevant.

A practical benefit of going exclusively little-endian would be saving on htonl/ntohl conversions for network communication, allowing raw binary data to be sent directly (from the application to other instances of itself on a remote machine). The performance difference would be negligible, but reducing code complexity is quite attractive.
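For illustration, here's a minimal sketch of what that raw-binary path might look like, assuming both ends are little-endian and built with the same struct layout; the Message type and send_all helper are hypothetical:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical message with fixed-width fields so the layout is predictable.
struct Message {
    uint32_t id;
    uint64_t payload_length;
};
static_assert(sizeof(Message) == 16, "unexpected padding in Message");

// Placeholder for whatever socket wrapper the application actually uses.
void send_all(const void* data, std::size_t len);

void send_message(const Message& msg) {
    // No htonl/ntohl: the bytes go out exactly as they sit in memory,
    // which is only correct if the receiver shares the same endianness.
    send_all(&msg, sizeof(msg));
}
```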

Is there a compelling reason to support big-endian on desktops? Are big-endian systems even being used for desktops these days?

Wingblade
  • 207

3 Answers

4

Note that 'network byte order' is big endian, so if you are transmitting any standardized structures, you will need to do that conversion.
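As a minimal sketch of that conventional conversion, assuming a hypothetical fixed-width header struct; htonl/ntohl come from <arpa/inet.h> on POSIX and <winsock2.h> on Windows:

```cpp
#include <cstdint>
#ifdef _WIN32
#include <winsock2.h>
#else
#include <arpa/inet.h>
#endif

// Hypothetical wire header; every field is converted individually.
struct WireHeader {
    uint32_t id;
    uint32_t length;
};

WireHeader to_network(WireHeader h) {   // host -> network (big-endian) before sending
    h.id = htonl(h.id);
    h.length = htonl(h.length);
    return h;
}

WireHeader from_network(WireHeader h) { // network -> host after receiving
    h.id = ntohl(h.id);
    h.length = ntohl(h.length);
    return h;
}
```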

Generally, most people avoid this issue by transmitting data in a textual form (like XML or JSON).

There are no major, widely used desktop chipsets using big-endian byte order, so it may not be a practical problem for you. But things have a way of changing, and code has a way of getting lifted from one place and used in another.

I'd write the code the conventional way, supporting network byte order, as this will be the least surprising thing to do. As you say, the performance costs are really minimal. And consider using a textual format like JSON. That makes this entire issue go away, and has other benefits as well (traffic dumps are easier for people to read, and it's easier to leverage other tools that expect data in JSON or XML format).
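As a rough sketch of that textual route, assuming the nlohmann/json library is available (any JSON or XML library would do); byte order never enters the picture because everything travels as text:

```cpp
#include <cstdint>
#include <string>
#include <nlohmann/json.hpp>

// Hypothetical encode/decode pair for the application's messages.
std::string encode_message(uint32_t id, uint64_t payload_length) {
    nlohmann::json j;
    j["id"] = id;
    j["payload_length"] = payload_length;
    return j.dump();   // e.g. {"id":42,"payload_length":1024}
}

void decode_message(const std::string& text) {
    auto j = nlohmann::json::parse(text);
    auto id = j["id"].get<uint32_t>();
    auto payload_length = j["payload_length"].get<uint64_t>();
    // ... hand the decoded values to the rest of the application ...
    (void)id; (void)payload_length;
}
```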

Lewis Pringle
  • 2,975
  • 1
  • 11
  • 15
2

Don't just assume things in your code. You never know how long it will be in use.

I personally know of products in use since the early 1990s, moving from 68000-based MacOS via PA-RISC HP-UX, then x86 Linux, to currently x86 Windows. There were quite a few changes of architecture, endianness, filename syntax, etc. during that timespan.

So if your code deliberately doesn't support e.g. big-endian CPUs, write a unit test that fails if run on such a machine. Then in 10 years' time, when your future colleagues move your code to some new architecture, they get a clear indication of why it won't work out-of-the-box.
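A minimal sketch of such a guard, assuming C++20 is available (std::endian from <bit>); a plain static_assert stops the build outright, which serves the same purpose as a failing unit test:

```cpp
#include <bit>

// Fails to compile on a big-endian target, pointing future maintainers
// straight at the assumption this code makes.
static_assert(std::endian::native == std::endian::little,
              "This code assumes a little-endian CPU; add byte-order handling before porting.");
```

Before C++20, a runtime check (e.g. inspecting the first byte of a known 16-bit value via memcpy) inside an ordinary unit test does the same job.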

0

In the last four years I haven’t seen any code that depended on endianness, or could have been made simpler or faster by making assumptions about it.

gnasher729
  • 49,096