IME, there's one big thing that often keeps my programs from being byte-order independent: wanting to quickly splat data structures into and out of files, pipes, and sockets, without having to encode or decode each element one-by-one. The only real way to make this endian-independent is to have byte-swapping accessors for everything when it's ultimately produced or consumed, but adding all the code for that is very tedious in most languages. One can argue that handling endianness is the responsible thing to do, but it just doesn't seem worthwhile when I practically know that no one will ever run my code on a big-endian processor.
This is functionally identical to the author's example - the file has a defined byte order and you have a choice of doing byte swapping or just explicitly writing out the bytes in the defined order. The author is saying your goal of avoiding "having to encode or decode each element one-by-one" is a misguided optimization.
Byte swapping is equivalent to needing to do encoding and decoding. Is it not?
The benefit is that you'd only have to do it for the parts of the data that are actively manipulated, which might be far less than the entirety of the data structure. Also, you can easily forward a copy elsewhere in the original format.
But if you know you're not going to have endianness problems, you can just skip that step entirely.
I think the article's author would say that loading data "without having to encode or decode each element" is premature optimization and more likely to have bugs. I tend to agree.
The optimisation the parent is referring to is development time/effort; if the alternative to dumping a structure to a file is to hand roll your serialiser/deserialiser, that's a slower & probably more error prone approach (depending on the context).
The article makes an argument that the hand-rolled solution is less buggy, if you approach it the right way.
For complicated data structures, it's probably best to use a library that serializes to a common standard. (For example, protocol buffers or JSON.)
But I think the article assumes you don't get to choose the protocol, so it probably has to be hand-written by someone.
Maybe if you hand-roll the struct layout, but if you use something like flatbuffers I doubt you would see many more bugs - and flatbuffers will take care of endian swaps as necessary without you needing to think about it.
Depends what you’re doing. I have a side project that generates CSVs in the GB range. It keeps everything in bytes because encode/decode is a lot of overhead in loops when you’re hitting them millions of times.
Not once you start getting into the range of hundreds of megabytes or more, which accounts for most situations where I'd use a binary format in the first place.
By the time I’m putting hundreds of MB somewhere, I want a defined format, not whatever the compiler happens to generate for this particular build of my software. There are plenty of nice ways to do this.
Struct layouts in C are defined by the platform's ABI, and on every sane platform, that just looks like "lay out each element in order, adding the smallest possible amount of padding to satisfy alignment requirements" [0]. There are presumably oddball platforms which do something else, but good luck actually finding one that has lots of RAM, an ordinary filesystem, and so on. (Within the realm of sane platforms, there are a few alignment oddities, but it's always safe to build packed structs as if each type is aligned to its size.)
Struct layouts for FFI in other languages tend to follow the C convention and/or allow explicit field offsets to be specified. Regardless, if you use the proper language constructs, it's nowhere near as undefined as "whatever the compiler happens to generate".
[0] https://www.gnu.org/software/c-intro-and-ref/manual/html_nod...
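For a concrete illustration of that rule, here's a hedged sketch; the offsets assume a typical ABI where uint32_t is 4-byte aligned, which holds on the common platforms but isn't guaranteed by the C standard itself:

    #include <stdint.h>

    struct example {
        uint8_t  a;   /* offset 0 */
                      /* 3 bytes of padding so b can be 4-aligned */
        uint32_t b;   /* offset 4 */
        uint16_t c;   /* offset 8 */
                      /* 2 bytes of tail padding so arrays of the struct stay aligned */
    };

    _Static_assert(sizeof(struct example) == 12, "assumes a typical ABI");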
If you are using C/C++ for any new app, there is a possibility you are writing code that has a performance requirement.
- mmap/io_uring/drivers and additional "zero-copy" code implementations require consideration about byte order.
- filesystems, databases, network applications can be high throughput and will certainly benefit from being zero-copy (with benefits anywhere from +1% to +2000% in performance.)
This is absolutely not "premature optimization." If you're a C/C++ engineer, you should know off the top of your head how many cycles syscalls & memcpys cost. (Spoiler: They're slow.) You should evaluate your performance requirements and decide if you need to eliminate that overhead. For certain applications, if you do not meet the performance requirements, you cannot ship.
Once upon a time I became the de facto admin for a VxWorks box because my code was to be the bottleneck on a task with a min throughput defined in the requirements and we weren't hitting the numbers. I ended up having to KVM into it and run benchmarks in vivo, which meant understanding the command line which I'd never seen before.
People were understandably concerned that we had fucked up in the feasibility phase of the project. Lots of people get themselves in trouble this way, and if we didn't finish our work on time during maintenance windows, this was a 9-figure piece of hardware sitting idle while our app picked its nose crunching data.
But I was on my longest hot streak of accurate perf estimates in my career and this one was not going to be my Icarus moment. It ended up being tweaks needed from the compiler writer and from Wind River (a DMA problem). I had to spend a lot of social capital on all of this, especially the Wind River conference call (it took them ten minutes to come around to my suggestion for a fix, which they shipped us in a week, after months and months of begging for that call).
100% on the business implications. Although a lot of engineers never have to touch it, DMA (& zero-copy) implementations are foundational to the performance of modern day computers that we sometimes take for granted.
The hard drive was running so slow I exclaimed “it’s almost like this drive is running in PATA mode.”
It was. The motherboard and CPU were newer than the VxWorks version and it was running in compatibility mode. We treated it like the previous hardware revision it was backward compatible with, and got 30% more throughput like magic. Exactly as predicted.
A memcpy should not be slow. It should be nearly as fast as generic memory copying can be. Most of the time you shouldn't even hit the actual function, but instead a bit of code generated by the compiler that does exactly the copy you need.
memcpy gets weird with pointer aliasing as well. There's a slower path if the pointers can end up overlapping, and you either have to prove it programmatically like Java does, do the defensive copy, or YOLO it and hope.
memcpy is only defined for non-overlapping memory regions (otherwise you should use memmove), but many platforms use memmove for memcpy anyway to avoid breaking user programs in unpredictable ways. Apparently this has also led to some arguments and glibc version incompatibilities (https://www.win.tue.nl/~aeb/linux/misc/gcc-semibug.html).
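A tiny made-up example of the distinction, for anyone who hasn't hit it: shifting elements within the same array overlaps, so memmove is the right call there, not memcpy.

    #include <string.h>

    /* remove the first element by sliding the rest down; assumes n >= 1 */
    void drop_first(int *a, size_t n)
    {
        /* source and destination overlap, so memmove, not memcpy */
        memmove(a, a + 1, (n - 1) * sizeof *a);
    }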
I don’t know why I said “path”, I meant instruction.
Any implementation of an algorithm is slow when your baseline is not performing the computation at all.
The fastest line of code is no line at all.‡
[‡]: Unless it's some weird architectural fluke with pipelining.
Haha, it's zero-copy! I never said it was "faster-copy" (-:
Apples and oranges. They're very different things, even if there's some overlap in use cases.
Yeah, I've always been blown away by how fast memcpy is. I'm guessing the OP is from a different world of engineering than I am.
The compiler can optimize this. See https://gcc.godbolt.org/z/hxW7hhrd7
#include <cstdint>
uint32_t read_le_uint32(const uint8_t* p)
{
    return p[0] | (p[1] << 8) | (p[2] << 16) | (p[3] << 24);
}
ends up as
read_le_uint32(unsigned char const*):
    mov eax, dword ptr [rdi]
    ret
This works with Clang and gcc on x86_64 (but not with MSVC).
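For what it's worth, the big-endian direction tends to get the same treatment: under the same assumptions (Clang or gcc at -O2 on x86_64), a sketch like the following typically compiles down to a mov plus a bswap, so the portable-shifts style costs essentially nothing either way.

    #include <stdint.h>

    uint32_t read_be_uint32(const uint8_t* p)
    {
        return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
             | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
    }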
The purpose of zero-copy can be to avoid deserialization entirely. All you do to deserialize is:
    uint8_t *buf = ...;
    struct example_payload *payload = (struct example_payload *) buf;
That's why when you access the variables you need to byte order swap. This is absolutely not portable, I agree. I also agree that it is error-prone. However, it is the reality of a lot of performance critical software.
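To make that concrete, a minimal sketch of the pattern, with an invented payload layout; as noted, it relies on the struct matching the wire layout exactly, which is precisely the non-portable part:

    #include <stdint.h>
    #include <arpa/inet.h>   /* ntohl */

    struct example_payload {
        uint32_t seq;        /* big-endian on the wire */
        uint8_t  data[60];
    };

    uint32_t payload_seq(const uint8_t *buf)
    {
        /* overlay the struct on the buffer; no copy, no up-front decode */
        const struct example_payload *p = (const struct example_payload *) buf;
        return ntohl(p->seq);   /* swap only the field you actually touch */
    }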
Yeah, I’ve occasionally had to manually special case big/little endian code, but most of the time you can write the generic code and the optimizer will take care of it. Unless you’re doing something very complicated it’s a quite trivial optimization to perform.
My uses of mmap have only ever been memoization, where I didn't care about byte order and instead just assumed the files wouldn't be portable between any two computers.
If you are going zero copy, you either need to give up on any kind of portability, or delve deep into compiler flags to standardize struct layout.
Maybe I'm missing something because I don't code network drivers, but wouldn't it be something like...
if it's little endian (on the wire), the process would be like:
(value[0] | (value[1] << 8) | (value[2] << 16) | (value[3] << 24))
and in big endian (again, on the wire, architecture endianness irrelevant) it would be the same thing with the indices reversed, where "value" is the 4 bytes read in off the wire?
The performance would be absolutely horrendous if network drivers were programmed this way. DMA (Direct Memory Access) is all about avoiding deserialization and copies of the data.
> memcpy slow
Uh...
Compared to doing nothing, yes it's "slow."
TCP/IP is big-endian, which is likely the largest footprint for these concerns.
"htonl, htons, ntohl, ntohs - convert values between host and network byte order"
The cheapest big-endian modern device is a Raspberry Pi running a NetBSD "eb" release, for those who want to test their code.
https://wiki.netbsd.org/ports/evbarm/
Yeah, you deal with byte order when marshaling stuff on the wire. I haven't dealt with it much for years, but back when I was doing embedded software it used to be in my face a lot.
Unless you're dealing with binary data, in which case byte order matters very much, and if you forget to convert it you're causing a world of pain for someone.
He even has an example where he just pushes the problem off to someone else: "if the people at Adobe wrote proper code to encode and decode their files". Yeah, hope they weren't ignoring byte order issues.
The article's point is that the machine's byte order doesn't matter. The byte order of a data stream of course matters, but they show a way to load a binary data stream without worrying about the machine's byte order.
The key insight is that people shouldn't try to optimize the case where the data stream's byte order happens to match the machine's byte order. That's both premature optimization and a recipe for bugs. Just don't worry about that case.
Load binary data one byte at a time and use shifts and ORs to compose the larger unit based on the data's byte order. That's 100% portable without any #ifdefs for the machine's byte order.
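The write side is the mirror image; a sketch like this emits a value into an explicitly little-endian stream, again without any #ifdef on the machine's byte order:

    #include <stdint.h>

    void write_le_uint32(uint8_t *p, uint32_t v)
    {
        p[0] = (uint8_t)(v);
        p[1] = (uint8_t)(v >> 8);
        p[2] = (uint8_t)(v >> 16);
        p[3] = (uint8_t)(v >> 24);
    }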
Really, except for networking (including, say, Bluetooth), nobody is big endian anymore. So how about we just don't leak that from the network layer.
And do not define any data format to be big endian anymore. Define it as little endian (do not leave it undefined) and everyone will be happy.
I think both SMB and 9p (Plan 9 resource sharing/file system protocol) are little endian.
So it's not even all networking... and "network byte order" will mess you up.
This is a reasonable way to do things, and I've used it before. However I just used Zig's method here, and like it a lot: https://ziglang.org/documentation/master/std/#std.io.Reader....
Given a reader (files, network streams, and buffers can all be turned into readers), you can call readInt. It takes the type you want and the endianness of the encoding. It's easy to write, self-documenting, and highly efficient.
If we're talking about a single int, the way you do it doesn't matter, just wrap it up in a readInt function.
But if we're talking about a struct or an array, if you're byte-order aware you can do things like memcpy the whole thing around that you couldn't do by assembling it out of individual readInt calls.
It's probably faster to memcpy the thing and then "swap" each element (the swaps may be no-ops under the hood). This should be portable and fast.
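Something like this sketch of that idea, with an invented header layout; the per-field conversions compile to nothing when the host already matches the data's byte order:

    #include <stdint.h>
    #include <string.h>

    struct wire_header {      /* little-endian in the external format */
        uint32_t length;
        uint16_t version;
        uint16_t flags;
    };

    /* reinterpret an in-memory value's bytes as little-endian data;
       a no-op on little-endian hosts, a byte swap on big-endian ones */
    static uint32_t from_le32(uint32_t v)
    {
        const uint8_t *b = (const uint8_t *)&v;
        return (uint32_t)b[0] | ((uint32_t)b[1] << 8)
             | ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
    }

    static uint16_t from_le16(uint16_t v)
    {
        const uint8_t *b = (const uint8_t *)&v;
        return (uint16_t)(b[0] | (b[1] << 8));
    }

    void load_header(struct wire_header *h, const uint8_t *buf)
    {
        memcpy(h, buf, sizeof *h);           /* one bulk copy...         */
        h->length  = from_le32(h->length);   /* ...then per-field fixups */
        h->version = from_le16(h->version);
        h->flags   = from_le16(h->flags);
    }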
Yeah it's not a hard thing to do, but I think Zig does it very cleanly.
As for reading structs, that's supported too: https://ziglang.org/documentation/master/std/#std.io.Reader....
readStructEndian will read the struct into memory, and perform the relevant byte swaps if the machine's endianness doesn't match the data format. No need to manually specify how a struct is supposed to perform the byte swap, that's all handled automatically (and efficiently) by comptime.
Comptime also means that when endianness matches, using these functions is a no-op. I expect you know this, but those new to the language may not: the endianness check in the implementation happens when compiling, not when decoding structs.
It's instructive how different in feel this solution is to the traditional #ifdefs which the Fine Article dislikes enough to write an entire (IMHO very confused and opaque) broadside against. The preprocessor is a second language superimposed over the first, which is friction, and the author would rather trust the compiler (despite explicitly noting that MSVC cannot be so trusted!) to optimize out a non-obvious solution using shifts, rather than risk the bugs which come with preprocessor-driven conditional compilation.
By contrast, if you don't know Zig, it's not all that obvious that the little-to-little case is a no-op on little-endian systems. If you do know Zig it is obvious, and it's also boring, in a good way: idiomatic Zig code does a lot of small things at compile time, using, for the most part, the same language as runtime code.
As a games coder I was glad when the xbox 360 / ps3 era came to an end; getting big endian clients talking to little endian servers was an endless source of bugs.
The other case where it matters is SIMD instructions where you're serializing or deserializing multiple fields at once, but the SIMD operations are usually architecture specific to begin with and so if you shuffle bytes into and out of the native packed formats it will be specific to the endianness of the native packed format, and then you can forget about byte order outside of those shuffle transformations.
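A hedged sketch of the shuffle approach being described, using SSSE3 on x86 (it's architecture-specific by construction, which is the point): byte-swap four 32-bit lanes with a single shuffle. Requires compiling with SSSE3 enabled (e.g. -mssse3).

    #include <stdint.h>
    #include <tmmintrin.h>   /* SSSE3: _mm_shuffle_epi8 */

    void bswap32x4(uint8_t *p)
    {
        const __m128i mask = _mm_setr_epi8(3, 2, 1, 0,   7, 6, 5, 4,
                                           11, 10, 9, 8, 15, 14, 13, 12);
        __m128i v = _mm_loadu_si128((const __m128i *) p);
        v = _mm_shuffle_epi8(v, mask);   /* reverse bytes within each 32-bit lane */
        _mm_storeu_si128((__m128i *) p, v);
    }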
What he said: if you read bytes that arrive in some defined byte order, you compose them yourself correctly; no byte swapping, just reading byte by byte and converting them into the number value you need. The architecture's byte order stays implicit as long as you use the architecture's tools to convert the bytes.
Rust, for example has from_be_bytes(), from_le_bytes() and from_ne_bytes() methods for the number primitives u16, i16, u32, and so on. They all take a byte array of the correct length and interpret them as big, little and native endian and convert them to the number.
The first two methods work fine on all architectures, and that's what this article is about.
The third method, however, is architecture-dependent and should not be used for network data, because it would work differently and that's what you don't want. In fact, let me cite this part from the documentation. It's very polite but true.
> As the target platform’s native endianness is used, portable code likely wants to use from_be_bytes or from_le_bytes, as appropriate instead.
I don't like these ambiguous titles. From the title I thought I was going to read that byte order doesn't matter, when in fact the title should be "a computer's byte order is irrelevant to high-level languages". At least state the fallacy in unambiguous terms in the first sentence. In any case, it was an interesting read.
I came here to write the same. I learned a thing or two about how higher level languages work.
Two areas where I find it does matter: assembly language, where bytes are parsed, sorted, or transformed in some way by code that writes words; and binary file representations written on a little-endian machine and read by a big-endian machine.
> If you wrote it on a PC and tried to read it on a Mac, though, it wouldn't work unless back on the PC you checked a button that said you wanted the file to be readable on a Mac. (Why wouldn't you? Seriously, why wouldn't you?)
As a non-SWE, whenever I see checkboxes to enable options that maximize compatibility, I often assume there’s an implicit trade-off, so if it isn’t checked by default, I don’t enable such things unless strictly necessary. I don’t have any solid reason for this, it’s just my intuition. After all, if there were no good reasons not to enable Mac compatibility, why wouldn’t it be the default?
Edit: spelling error with “implicit”
He's right that you shouldn't use ifdefs, but I think a macro like le32toh() is far clearer and more concise than a bunch of shifts and ors.
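For reference, on glibc those macros live in <endian.h> (the BSDs use <sys/endian.h>), so the call site can stay as terse as suggested; a small sketch, assuming such a platform:

    #include <endian.h>   /* glibc; the BSDs use <sys/endian.h> instead */
    #include <stdint.h>
    #include <string.h>

    uint32_t load_le_uint32(const uint8_t *p)
    {
        uint32_t v;
        memcpy(&v, p, sizeof v);   /* avoid unaligned-access issues  */
        return le32toh(v);         /* no-op on little-endian hosts   */
    }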
Also, a lot of comments in this thread have nothing to do with the article and appear to be responses to some invisible strawman.
Byte order matters in all cases where there is I/O: files, network streams, inter-chip communication, ... For data that stays on the same processor, or for files that are only accessed by processors of the same endianness, there really is no issue, even when doing bit manipulation.
If Network Byte Order wasn't a thing, we could all just pretend big endian doesn't exist outside of mainframes.
Characters are not necessarily 8 bits. So you need to do a bit more to have true portability.
Unless you’re writing code to decode image file formats.
No, same deal. The article argues that you should write portable code based on the ordered bytes in an external format, as that's guaranteed to be a machine-independent thing (i.e. it's stored on disk in exactly one way). The same is true for image files as for 2-byte wchar files as for zip files, yada yada.
It's true as far as it goes, but (1) it leans very heavily on the compiler understanding what you're doing and "un-portabilifying" your code when the native byte order matches the file format and (2) it presumes you're working with pickled "file" formats you "stream" in via bytes and not e.g. on memory mapped regions (e.g. network packets!) that want naturally to be inspected/modified in place.
It's fine advice though for the 90% of use cases. The author is correct that people tend to tie themselves into knots needlessly over this stuff.