Overview of cross-architecture portability problems

85 points | posted 19 hours ago by todsacerdoti | 16 comments

johnklos

11 hours ago

One of the larger problems is purely social. Some people unnecessarily resist the idea that code can run on something other than x86 (and perhaps now Arm).

Interestingly, some apologists say that maintaining portability is a hindrance that costs time and money, as though a company's profit or a programmer's productivity would be dramatically affected by having to program without assumptions about the underlying architecture. In reality, writing without those assumptions usually makes code better, with fewer edge cases.

It'd be nice if people wouldn't go out of their way to fight against portability, but some people do :(

JodieBenitez

3 minutes ago

> In reality, writing without those assumptions usually makes code better, with fewer edge cases.

Writing code for portability is usually the easy part. Testing your software on multiple architectures is a lot more work though.

gwbas1c

9 hours ago

I argue the opposite: It's important to know the purpose of the program, including the computer that the program will run on.

I can understand polite disagreements about target architectures: you might write a Windows desktop program and disagree with someone about whether you should support 32-bit or go 64-bit only.

BUT: Outside of x64 and ARM, your program (Windows Desktop) is highly unlikely to run on any other instruction set.

The reality is that it's very time-consuming and labor-intensive to ship correctly working software. Choosing and limiting your target architectures is an important part of that decision: either you increase development cost and time (and risk not getting a return on your investment), or you ship something buggy and risk your reputation and support costs.

johnklos

3 hours ago

> Windows

Everything you write makes sense in the context of Windows, but only for Windows. That's certainly not true for open-source, non-Windows-specific software.

TZubiri

9 hours ago

Also, develop on your target architecture. I see too many people trying to use their Arm Macs to develop locally. Just ssh into an x86 box.

rqtwteye

9 hours ago

I am for portability, but if you decide to go that route, you need to do a ton of testing on different platforms to make sure it really works. That definitely costs time and effort. There are a lot of ways to do it wrong.

denotational

12 hours ago

Missed my favourite one: differences in the memory consistency model.

If you’re using stdatomic and the like correctly, then the library and compiler writers have your back, but if you aren’t (e.g. using relaxed ordering when acquire/release is required), or if you’re rolling your own synchronisation primitives (don’t do this unless you know what you’re doing!), then you’re going to have a bad day.
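A minimal sketch of the kind of failure being described, assuming C11 <stdatomic.h> and POSIX threads (all names invented): with memory_order_relaxed, this publish/consume pattern can appear to work on x86, which has a comparatively strong memory model, and then break on Arm or POWER; release/acquire states the required ordering explicitly.

    /* Sketch only: a producer publishes data, a consumer waits on a flag.
     * With memory_order_relaxed this may happen to work on x86 yet fail on
     * Arm/POWER; release/acquire makes the required ordering explicit. */
    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    static int payload;              /* plain data being published */
    static atomic_int ready = 0;     /* synchronisation flag       */

    static void *producer(void *arg) {
        (void)arg;
        payload = 42;
        /* Release: all writes above become visible before the flag flips.
         * memory_order_relaxed here would NOT guarantee that. */
        atomic_store_explicit(&ready, 1, memory_order_release);
        return NULL;
    }

    static void *consumer(void *arg) {
        (void)arg;
        /* Acquire: once ready == 1 is observed, payload is guaranteed visible. */
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            ; /* spin */
        printf("payload = %d\n", payload);  /* always 42 with acquire/release */
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }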

hanikesn

13 hours ago

Looks like non-4K page sizes are missing, which tripped up some software running on Asahi Linux.
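For what it's worth, the usual guard is to ask the OS for the page size at runtime rather than hardcoding 4096; a small POSIX sketch (Apple Silicon and Asahi Linux commonly use 16K pages):

    /* Sketch: query the page size at runtime instead of assuming 4096.
     * On Asahi Linux / Apple Silicon the page size is commonly 16384. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        long page = sysconf(_SC_PAGESIZE);   /* POSIX; returns -1 on error */
        if (page < 0) {
            perror("sysconf");
            return 1;
        }
        printf("page size: %ld bytes\n", page);
        /* Any mmap arithmetic, alignment, or guard-page logic should use
         * this value rather than a hardcoded 4096. */
        return 0;
    }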

saagarjha

11 hours ago

Most software doesn’t really care. This list only has common issues (though the issues it goes over seem more like problems from 2010…)

magicalhippo

10 hours ago

One I recall, working on a C++ program that we distributed for Windows, Linux, and PowerPC OS X at the time, was that some platforms handed out memory zero-initialized by the OS memory manager and some didn't.

Our code wasn't meant to take advantage of this, but it sometimes meant buggy code appeared fine on one platform, where pointers in structures happened to be zeroed, yet crashed on others where they weren't.

As I recall, it was mostly that and endianness that caused the most grief. Not that there were many issues at all, since we used Qt and Boost...
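A hypothetical illustration of that bug class (not the original code): a struct member is never initialized, so whether it happens to read as NULL depends on what the allocator hands back on that platform; explicit initialization, or calloc, removes the platform dependence.

    /* Sketch of the bug class described above: a struct member is never
     * initialised, so whether it reads as NULL depends on whatever memory
     * the allocator happens to hand back on that platform. */
    #include <stdlib.h>
    #include <stdio.h>

    struct node {
        int value;
        struct node *next;   /* forgotten initialisation */
    };

    int main(void) {
        struct node *n = malloc(sizeof *n);  /* contents indeterminate */
        if (!n) return 1;
        n->value = 1;
        /* On a platform where fresh pages arrive zeroed, n->next may happen
         * to be NULL and the check below "works"; elsewhere it is garbage
         * and dereferencing it crashes. Reading it at all is undefined. */
        if (n->next == NULL)
            printf("looks fine here\n");

        /* Portable fix: initialise explicitly, or use calloc. */
        struct node *m = calloc(1, sizeof *m);
        if (!m) { free(n); return 1; }
        printf("m->next is reliably NULL: %p\n", (void *)m->next);
        free(n);
        free(m);
        return 0;
    }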

Archit3ch

11 hours ago

The fun of discovering size_t is defined differently on Apple Silicon.
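One hedged sketch of how this tends to bite (my reading of the gotcha, not necessarily the parent's): size_t need not be the same underlying type as any particular fixed-width integer, so printf formats or casts that conflate them aren't portable; %zu is.

    /* Sketch: never assume size_t matches a specific fixed-width type.
     * On some platforms (e.g. macOS/arm64, as I understand it) size_t is
     * unsigned long while uint64_t is unsigned long long, so overloads and
     * printf formats that conflate the two are not portable. */
    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    int main(void) {
        size_t n = sizeof(uint64_t);

        /* Portable: %zu is the dedicated size_t conversion specifier. */
        printf("sizeof(uint64_t) = %zu\n", n);

        /* Not portable: "%lu" or "%llu" may or may not match size_t,
         * depending on how the platform defines it. */

        /* If a fixed-width value is genuinely needed, convert explicitly. */
        uint64_t wide = (uint64_t)n;
        printf("as uint64_t = %llu\n", (unsigned long long)wide);
        return 0;
    }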

benchloftbrunch

3 hours ago

Something not mentioned is that on Windows `long` is always 32 bits (same size as `int`), even on 64-bit architectures.
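That's the LLP64 (Windows) vs LP64 (most 64-bit Unix) split. A common defence, sketched below, is to use fixed-width types wherever the width matters and to assert any width assumptions you keep:

    /* Sketch: on Windows (LLP64) long is 32 bits even in 64-bit builds,
     * while on most 64-bit Unix (LP64) it is 64 bits. Prefer fixed-width
     * types where the width matters, and assert the assumptions you keep. */
    #include <stdint.h>
    #include <stdio.h>
    #include <assert.h>

    /* A serialized on-disk offset: spell out the width instead of `long`. */
    typedef int64_t file_offset_t;

    static_assert(sizeof(file_offset_t) == 8, "offsets must be 64-bit");
    /* Note: sizeof(long) is 4 on Win64 and 8 on typical 64-bit Linux/macOS,
     * which is exactly the trap being described. */

    int main(void) {
        printf("sizeof(long)    = %zu\n", sizeof(long));
        printf("sizeof(int64_t) = %zu\n", sizeof(int64_t));
        return 0;
    }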

malkia

3 hours ago

Is it still the case that wasm is 32-bit only?

AStonesThrow

5 hours ago

In my misspent college years, I was coding game servers in C. A new iteration came about, and the project lead had coded it on a VAX/BSD system to which I had no access.

Under whatever C implementation on VAX/BSD, a NULL pointer dereference returned "0". Yep, you read that right. There was no segfault, no error condition, just a simple nul-for-null!

This was all fine and dandy until he handed it over for me to port to SunOS, Ultrix, NeXT Mach BSD, etc. (Interesting times!)

I honestly didn't find a second implementation or architecture where a NULL dereference was OK. Whatever the standards were at the time, we were on the cusp of ANSI compliance and everyone was adopting gcc. Segmentation faults were "handled" by immediately kicking off 1-255 players and crashing the server. Not a good user experience!

So my first debugging task, and it went on for a long time, was to find every pointer dereference (mostly of nul-terminated strings) and wrap it in a conditional: if (ptr != NULL) { ... }
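Roughly the shape of that workaround, as a hypothetical sketch (function names invented):

    /* Hypothetical sketch of the workaround described above: guard each
     * pointer dereference so a NULL string degrades gracefully instead of
     * taking the whole server down with a segfault. */
    #include <stdio.h>

    /* Invented helper: treat a NULL string as the empty string. */
    static const char *safe_str(const char *s) {
        return (s != NULL) ? s : "";
    }

    static void announce_player(const char *name) {
        if (name != NULL) {                  /* the "if (ptr != NULL)" wrap */
            printf("%s has entered the game\n", name);
        }
        /* Or, equivalently, normalise once at the boundary: */
        printf("welcome, %s\n", safe_str(name));
    }

    int main(void) {
        announce_player("Lister");
        announce_player(NULL);   /* handled gracefully instead of crashing */
        return 0;
    }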

At the time, I considered it crazy/lazy to omit those checks in the first place and code on such an assumption. But C was so cozy with the hardware, and we were just kids. And that was the genesis of the cynical expression "All the world's a VAX!"