e2le
6 days ago
Seems sensible. Only 2.6% of users (with telemetry enabled) are using 32-bit Windows, while 6.4% are using 32-bit Firefox on 64-bit Windows[0]. 32-bit Linux might see more use, but it isn't included in the stats; only 5.3% of users are running Linux[1], and I doubt many of them enable telemetry.
Maybe they could also drop support for older x86-64 CPUs and release more optimised builds. Most Linux distributions are raising their baseline to x86-64-v2 or higher, and most Firefox users (>90%)[0] seem to meet at least the x86-64-v2 requirements.
[0]: https://firefoxgraphics.github.io/telemetry/#view=system
[1]: https://firefoxgraphics.github.io/telemetry/#view=general
pigeons
3 days ago
That seems like a lot of people to abandon! Perhaps the right financial decision, I don't know, but that seems like a significant number of users.
pavon
3 days ago
They aren't ending support for 32-bit Windows. If the ratio of 32/64 bit users on Linux matched those on Windows, then this would affect 0.5% of their users.
zozbot234
2 days ago
Does this mean that 32-bit Linux users will be able to run more up-to-date versions using Wine?
gweinberg
2 days ago
Abandon is too strong a word. I imagine most people who are still using 32 bit operating systems aren't too concerned about getting the very latest version of firefox either.
yreg
2 days ago
They might not be concerned, but websites using new standards will slowly start to break for them.
ryan-ca
2 days ago
Polyfills are standard for websites to be compatible with older browsers.
01HNNWZ0MV43FF
2 days ago
It will all break in time.
These things that look like institutions, that look like bricks carved from granite, are just spinning plates that have been spinning for a few years.
When I fight glibc dependency hell across Ubuntu 22 and Ubuntu 24, I sympathize with Firefox choosing to spin the 64-bit plates and not the 32-bit plates.
kstrauser
2 days ago
If I were a product decision maker, I’d be ok with that. It’d have to be a very unusual niche to make it worth the engineering effort to support customers who only run decades-old hardware.
Employees: “We want to use new feature X.”
Boss: “Sorry, but that isn’t available for our wealthy customers who are stuck on Eee PCs.”
Nah.
metalliqaz
3 days ago
within those numbers are people who don't really have a preference one way or another, and just didn't bother to upgrade. I have to imagine that the group of people that must use 32-bit and need modern features is vanishingly small.
pdntspa
3 days ago
I would bet a lot of those folks are running embedded linux environments. Kiosks, industrial control, signage, NUCs, etc. I know that as of about 6 years ago I was still working with brand-new 32-bit celeron systems for embedded applications. (Though those CPUs were EOL'd and we were transitioning to 64-bit)
FirmwareBurner
3 days ago
6 years ago was 2019. You were working in 2019 with "brand new 32-bit-only Celerons" which had no 64 bit support?!
Nah mate, something doesn't add up. I can't buy this. Even the cheapest Atoms had 64-bit support much earlier than that, and Atoms were lower-tier silicon than Celerons, so you can't tell me Intel had brand new 32-bit-only Celerons in 2019.
My Google-fu found that the last 32-bit-only chips Intel shipped were the Quark embedded SoCs, EoL'd in 2015. So what you're saying doesn't pass the smell test.
pdntspa
3 days ago
May have been 2018. Definitely not that long before covid. Suppliers in the embedded space will stockpile EOL parts for desperate integrators such as ourselves, and can continue to supply new units for years after they're discontinued. The product needed a custom linux kernel compile and it took a while to get that working on 64-bit and we had to ship new units. Yes the COGS get ridiculous.
FirmwareBurner
3 days ago
Sure, but in that case it probably wasn't a Celeron, and there are industrial players still keeping 386 systems alive for one reason or another, but it feels like bad faith to call it "brand new" when it's actually "~10-year-old new old stock". Do you know what I mean?
pdntspa
3 days ago
I don't understand what the distinction/problem is. It's new-in-box, a la "brand new". You're really getting tripped up over semantics?
My point is this stuff is still in play in a lot of places.
FirmwareBurner
3 days ago
Maybe it's the language barrier since I'm not a native English speaker, but where I'm from the phrase "brand new" means something different: something that came onto the market very recently, not something that came onto the market 10+ years ago but was never opened from the packaging. The latter no longer means "brand new", it means "old but never used/opened". Very different things.
So when you tell me "brand new 32 bit Celeron" it is understood as "just came onto the market".
Am I right or wrong with this understanding?
>My point is this stuff is still in play in a lot of places.
I spent ~15 years in embedded and can't concur on the "still in play in a lot of places" part. I'm not denying some users could still exist out there, but I'm sure we can probably count them on very few fingers, since Intel's 32-bit embedded chips never had much traction to begin with.
pdntspa
2 days ago
I've never understood 'brand new' to imply anything about freshness. But according to Merriam-Webster it means both, in different but very similar contexts.
mpol
2 days ago
The distinction in English might be more in "new" versus "used". And yes, that is inconsistent, you would think "new" versus "old" and "used" versus "unused". But alas :)
gwbas1c
2 days ago
The term in this case is "new old stock:"
As in, a product that was manufactured, kept in its original packaging, and "unopened and unused".
(Although there's some allowances for the vendor to test because you don't want to buy something DOA.)
(Although I won't get too angry for someone saying "brand new." "New old stock" is kind of an obscure term that you don't come across unless you're the kind of person who cares about that kind of thing.)
kstrauser
3 days ago
I think that’s the right way to look at it. If you want a 32 bit system to play with as a hobby, you know you’re going to bump into roadblocks. And if you’re using a 20 year old system for non-hobby stuff, you already know there’s a lot of things that don’t work anymore.
FirmwareBurner
3 days ago
>And if you’re using a 20 year old system for non-hobby stuff, you already know there’s a lot of things that don’t work anymore.
Mate, a 20-year-old system means a Pentium 4 Prescott or an Athlon 64, both of which had 64-bit support. And a year after that we already had dual-core 64-bit CPUs.
So if you're stuck on 32 bit CPUs then your system is even older than 20 years.
kstrauser
2 days ago
That was a transitional time. Intel Core CPUs launched as 32 bit in 2006, and the first Intel Macs around then used them. OS X Lion dropped support for 32 bit Intel in 2011.
So you could very well have bought a decent quality 32 bit system after 2005, although the writing was on the wall long before then.
zokier
2 days ago
Maybe more relevantly, the first Atom CPUs were 32-bit only and were used in the popular netbooks (eeepc etc) during 2008-2010ish era.
FirmwareBurner
2 days ago
>So you could very well have bought a decent quality 32 bit system after 2005, although the writing was on the wall long before then.
Not really. With the launch of the Athlon 64, AMD basically replaced all their 32-bit CPU lineups with the new arch rather than keeping them around much longer as a lower-tier part. By 2005 I expect 90% of new PCs sold were already 64-bit ready.
axiolite
2 days ago
> By 2005 I expect 90% of new PCs sold were already 64 bit ready.
You're several years off:
"The FIRST processor to implement Intel 64 was the multi-socket processor Xeon code-named Nocona in June 2004. In contrast, the initial Prescott chips (February 2004) did not enable this feature."
"The first Intel mobile processor implementing Intel 64 is the Merom version of the Core 2 processor, which was released on July 27, 2006. None of Intel's earlier notebook CPUs (Core Duo, Pentium M, Celeron M, Mobile Pentium 4) implement Intel 64."
https://en.wikipedia.org/wiki/X86-64#Intel_64
"2012: Intel themselves are limiting the functionality of the Cedar-Trail Atom CPUs to 32bit only"
https://forums.tomshardware.com/threads/no-emt64-on-intel-at...
Intel had 80% of the CPU market at the time.
kstrauser
2 days ago
Intel didn’t do that by 2005, though. MacBooks weren’t the single most popular product line, but they weren’t exactly eMachines.
FirmwareBurner
2 days ago
MacBook market share around 2005 was too small for that to matter in the PC CPU statistics.
waterhouse
3 days ago
"Modern features" are one thing; "security updates" are another. According to the blog post, security updates are guaranteed for 1 year.
pigeons
3 days ago
It's an actual migration to a new platform, more than just not bothering to upgrade though.
rolph
3 days ago
Some people use older tech precisely because it is physically incapable of facilitating some unpalatable tech that they don't require.
lou1306
3 days ago
Mozilla is in extremely dire straits right now, so unless this "lot of people" make a concerted donation effort to keep the lights on I would be hardly shocked by the sunsetting.
nicoburns
2 days ago
Dire straits? They had $826.6M in revenue in 2024.
They will be in dire straits if the Google money goes away for some reason, but right now they have plenty of money.
(not that I think it makes any sense for them to maintain support for 32-bit cpus)
hulitu
2 days ago
> Mozilla is in extremely dire straits right now, so unless this "lot of people" make a concerted donation effort
Last i checked, Mozilla was an ad company with Google as the main "donor".
kstrauser
3 days ago
I’d have to agree. I doubt there are that many (in relative terms) people browsing the web on 32-bit CPUs and expecting modern experiences. I’ve gotta imagine it would be pretty miserable, what with the inherent RAM limitations on those older systems, and I’m sure JavaScript engines aren’t setting speed records on Pentium 4s.
cosmic_cheese
3 days ago
Yeah consumer CPUs have been 64-bit since what, the PowerPC G5 (2003), Athlon 64 (2003), and Core 2 (2006)? There were a few 32-bit x86 CPUs that were released afterward, but they were things like Atoms which were quite weak even for the time and would be practically useless on the modern internet.
More generally I feel that Core 2 serves as a pretty good line in the sand across the board. It’s not too hard to make machines of that vintage useful, but becomes progressively challenging with anything older.
Sohcahtoa82
2 days ago
For what it's worth, people may have been running 64-bit CPUs, but many were still on 32-bit OSes. I was on 32-bit XP until I upgraded to 64-bit Win7.
cosmic_cheese
2 days ago
I have to imagine that group to be pretty small by now, though. Most PCs with specs good enough to still be useful now run W7 64-bit about as well as they do XP SP3 (an old C2D box of mine feels great under 7, as an anecdote), and for those who’ve been running Linux on these boxes there’s not really much reason to go for a 32-bit build over a 64-bit one.
kstrauser
2 days ago
I mentioned elsewhere, but Apple started selling 32-bit Core (not Core 2) MacBook Pros in 2006. Those seemed dated even at the time. I'd call them basically the last of the 32-bit premium computers from a major vendor.
Frankly, anything older than that sucks so much power per unit of work that I wouldn’t want to use them for anything other than a space heater.
selectodude
3 days ago
Intel Prescott, so like 2004.
anthk
2 days ago
A 32-bit Atom netbook. I use offpunk, gopher://magical.fish among tons of services (and HN works straight there), and gemini://gemi.dev over the Gemini protocol with Telescope as the client. mpv+yt-dlp+streamlink complement the video support. Miserable?
Go try browsing the web without uBlock Origin today under an i3.
Wowfunhappy
3 days ago
I haven’t tried it, but as bloated as the web is, I don’t think it’s so bad that you need gigabytes of memory or a blazing fast CPU to e.g. read a news website.
As long as you don’t open a million tabs and aren’t expecting to edit complex Figma projects, I’d expect browsing the web with a Pentium + a lightweight distro to be mostly fine.
Idk, I think this is sad. Reviving old hardware has long been one thing Linux is really great at.
doubled112
3 days ago
Try it and come back to let us know. The modern web is incredibly heavy. Videos everywhere, tons of JavaScript, etc.
My wife had an HP Stream thing with an Intel N3060 CPU and 4GB of RAM. I warned her but it was cheap enough it almost got the job done.
Gmail's web interface would take almost a minute to load. It uses about 500MB of RAM by itself running Chrome.
Does browsing the web include checking your email? Not if you need web mail, apparently.
Check out the memory usage for yourself one of these days on the things you use daily. Could you still do them?
Wowfunhappy
2 days ago
Huh. The reason I'm surprised is that I'm able to comfortably browse the web in virtual machines with only 1 cpu core and 2 gb of memory, even when the VM is simultaneously compiling software. This is only opening 1-2 tabs at a time, mind you.
zozbot234
2 days ago
A lightweight Linux desktop still works fine for very basic tasks with 4GB RAM, and that's without even setting up compressed RAM swap. The older 2GB netbooks and tablets might be at the end of the road, though.
kstrauser
2 days ago
I don’t think web browsing is a very basic task anymore, though. A substantial portion of new sites are React SPAs with a lot of client processing demands.
zozbot234
2 days ago
The other issue, AIUI, and a more pressing one, is that the browser itself is getting a lot heavier wrt. RAM usage, even when simply showing a blank page. That's what ends up being limiting for very-low-RAM setups.
anthk
2 days ago
Chromium has a --light switch. And, for the rest, git clone gopher://bitreich.org/privacy-haters, look up the Unix script for the Chrome variables, and copy the --arguments stuff into the Chromium launcher under Windows. Add uBlock Origin too.
4GB of RAM should be more than enough.
mschuster91
2 days ago
I have a ThinkPad X1 Gen3 Tablet (20KK) here for my Windows needs, my daily driver is an M2 MBA, and my work machine is a 2019 16-inch MBP (although admittedly, that beast got an i9...).
Got the Thinkpad for half the ebay value on a hamfest. Made in 2018-ish, i5-8350U CPU... It's a nice thing, the form factor is awesome and so is the built-in LTE modem. The problem is, more than a dozen Chrome tabs and it slows to a crawl. Even the prior work machine, a 2015 MBP, performed better.
And yes you absolutely need a beefy CPU for a news site. Just look at Süddeutsche Zeitung, a reputable newspaper. 178 requests, 1.9 MB, 33 seconds load time. And almost all of that crap is some sort of advertising - and that despite me being an actually subscribed customer with adblock enabled on top of that.
anthk
2 days ago
- Install chromium
- Ublock origin instead of AdBlock
- git clone git://bitreich.org/privacy-haters
- either copy the chromium file to /etc/profile.d/chromium.sh under GNU/Linux/BSD and chmod +x it, or copy the --arguments array into the desktop launcher path under Windows, such as "C:\foo\bar\chrome.exe" --huge-ass-list-of-arguments-copied-there.
Although this is HN, so I would just suggest disabling JS in the uBO settings and enabling uBO's advanced mode. Now click on the uBlock Origin icon and mark the 3rd-party scripts and such in red, and leave the 1st-party images/requests enabled. Then start accepting newspapers' domains and CDNs until it works. The CPU usage will plummet.
arp242
3 days ago
> 32-bit Linux might see more use
Probably less, not more. Many distros either stopped supporting 32bit systems, or are planning to. As the announcement says, that's why they're stopping support now.
darkmighty
3 days ago
> Maybe they could also drop support for older x86_64 CPU's, releasing more optimised builds
Question: Don't optimizers support multiple ISA versions, similar to web polyfill, and run the appropriate instructions at runtime? I suppose the runtime checks have some cost. At least I don't think I've ever run anything that errored out due to specific missing instructions.
igrunert
3 days ago
A CMPXCHG16B instruction is going to be faster than a function call; and if the function is inlined instead, there's still a binary size cost.
The last processor without the CMPXCHG16B instruction was released in 2006 so far as I can tell. Windows 8.1 64-bit had a hard requirement on the CMPXCHG16B instruction, and that was released in 2013 (and is no longer supported as of 2023). At minimum Firefox should be building with -mcx16 for the Windows builds - it's a hard requirement for the underlying operating system anyway.
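For illustration, a minimal sketch of the kind of 16-byte compare-and-swap in question (assuming GCC or Clang on x86-64; the pair_t type is just made up):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* A 16-byte value, e.g. a pointer plus an ABA-avoidance counter. */
    typedef struct { uintptr_t ptr; uintptr_t tag; } pair_t;

    bool swap_pair(_Atomic pair_t *p, pair_t expected, pair_t desired) {
        /* Built with -mcx16 this can compile down to a single lock cmpxchg16b;
           without it, the compiler typically falls back to a libatomic call
           (exact behaviour is compiler- and version-dependent). */
        return atomic_compare_exchange_strong(p, &expected, desired);
    }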
enedil
3 days ago
Let me play devil's advocate: for some reason, functions such as strcpy in glibc have multiple runtime implementations and are selected by the dynamic linker at load time.
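For the curious, a rough sketch of that mechanism, GCC's ifunc attribute (the names here are invented, not glibc's actual resolvers):

    /* The resolver runs once at load time; the dynamic linker then binds
       my_strcpy directly to whichever implementation it returns. */
    static char *my_strcpy_generic(char *dst, const char *src) {
        char *d = dst;
        while ((*d++ = *src++) != '\0') { }
        return dst;
    }

    /* Stand-in for a vectorized variant; glibc ships SSE2/AVX2/etc. versions. */
    static char *my_strcpy_fast(char *dst, const char *src) {
        return my_strcpy_generic(dst, src);
    }

    static char *(*resolve_my_strcpy(void))(char *, const char *) {
        __builtin_cpu_init();
        return __builtin_cpu_supports("avx2") ? my_strcpy_fast
                                              : my_strcpy_generic;
    }

    char *my_strcpy(char *dst, const char *src)
        __attribute__((ifunc("resolve_my_strcpy")));

The price is that every call goes through the resolved symbol and none of the variants can be inlined at the call site.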
mort96
3 days ago
And there's a performance cost to that. If there was only one implementation of strcpy and it was the version that happens to be picked on my particular computer, and that implementation was in a header so that it could be inlined by my compiler, my programs would execute faster. The downside would be that my compiled program would only work on CPUs with the relevant instructions.
You could also have only one implementation of strcpy and use no exotic instructions. That would also be faster for small inputs, for the same reasons.
Having multiple implementations of strcpy selected at runtime optimizes for a combination of binary portability between different CPUs and for performance on long input, at the cost of performance for short inputs. Maybe this makes sense for strcpy, but it doesn't make sense for all functions.
duped
3 days ago
> my programs would execute faster
You can't really state this with any degree of certainty when talking about whole-program optimization and function inlining. Even with LTO today you're talking 2-3% overall improvement in execution time, without getting into the tradeoffs.
mort96
2 days ago
Typically, making it possible for the compiler to decide whether or not to inline a function is going to make code faster compared to disallowing inlining. Especially for functions like strcpy which have a fairly small function body and therefore may be good inlining targets. You're right that there could be cases where the inliner gets it wrong. Or even cases where the inliner got it right but inlining ended up shifting around some other parts of the executable which happened to cause a slow-down. But inliners are good enough that, in aggregate, they will increase performance rather than hurt it.
> Even with LTO today you're talking 2-3% overall improvement in execution time
Is this comparing inlining vs no inlining or LTO vs no LTO?
In any case, I didn't mean to imply that the difference is large. We're literally talking about a couple clock cycles at most per call to strcpy.
duped
2 days ago
What I was trying to point out is that you're essentially talking about LTO. Getting into the weeds, the compiler _can't_ optimize strcpy(*) in practice because it's not going to be defined in a header-only library; it's going to be in a different translation unit that gets either dynamically or statically linked. The only way to optimize the function call is with LTO - and in practice, LTO only accounts for 2-3% of performance improvements.
And at runtime, there is no meaningful difference between strcpy being linked at runtime or ahead of time. libc symbols get loaded first by the loader and after relocation the instruction sequence is identical to the statically linked binary. There is a tiny difference in startup time but it's negligible.
Essentially the C compilation and linkage model makes it impossible for functions like strcpy to be optimized beyond the point of a function call. The compiler often has exceptions for hot stdlib functions (like memcpy, strcpy, and friends) where it will emit an optimized sequence for the target, but this is the exception that proves the rule. In practice, statically linking in dependencies (like you're talking about) does not have a meaningful performance benefit in my experience.
(*) strcpy is weird; like many libc functions it's accessible via __builtin_strcpy in gcc, which may (but probably won't) emit a different sequence of instructions than the call to libc. I say "probably" because there are semantics undefined by the C standard that the compiler cannot reason about but the linker must support, like preloads and injection. In these cases symbols cannot be inlined, because it would break the ability of someone to inject a replacement for the symbol at runtime.
mort96
2 days ago
> What I was trying to point out is that you're essentially talking about LTO. Getting into the weeds, the compiler _can't_ optimize strcpy(*) in practice because its not going to be defined in a header-only library, it's going to be in a different translation unit that gets either dynamically or statically linked.
Repeating the part of my post that you took issue with:
> If there was only one implementation of strcpy and it was the version that happens to be picked on my particular computer, and that implementation was in a header so that it could be inlined by my compiler, my programs would execute faster.
So no, I'm not talking about LTO. I'm talking about a hypothetical alternate reality where strcpy is in a glibc header so that the compiler can inline it.
There are reasons why strcpy can't be in a header, and the primary technical one is that glibc wants the linker to pick between many different implementations of strcpy based on processor capabilities. I'm discussing the loss of inlining as a cost of having many different implementations picked at dynamic link time.
crest
3 days ago
Afaik runtime linkers can't convert a function call into a single non-call instruction.
PhilipRoman
2 days ago
Linux kernel has an interesting optimization using the ALTERNATIVE macro, where you can directly specify one of two instructions and it will be patched at runtime depending on cpu flags. No function calls needed (although you can have a function call as one of the instructions). It's a bit more messy in userspace where you have to respect platform page flags, etc. but it should be possible.
ChocolateGod
3 days ago
They could always just make the updater/installer install a version optimized for the CPU it's going to be installed on.
mort96
3 days ago
It's not that uncommon to run one system on multiple CPUs. People swap out the CPU in their desktops, people move a drive from one laptop to another, people make bootable USB sticks, people set up a system in a chroot on a host machine and then flash a target machine with the resulting image.
Loudergood
3 days ago
Detect that on launch and use the updater to reinstall.
ohdeargodno
3 days ago
Congratulations, you now need to make sure your on-launch detector is compatible with the lowest common denominator, while at the same time being able to detect modern architectures. You also now carry 10 different instances of firefox.exe to support people eventually running on Itanium, people that will open support requests and expect you to fix their abandoned platform.
For what reason, exactly ?
You want 32b x86 support: pay for it. You want <obscure architecture> support: pay for it. If you're ok with it being a fork, then maintain it.
zokier
3 days ago
There was a recent story about F-Droid running ancient x86-64 build servers and having issues due to lacking ISA extensions:
https://news.ycombinator.com/item?id=44884709
But generally it is rare to see higher than x86-64-v3 as a requirement, and that works with almost all CPUs sold in the past 10+ years (Atoms being a prominent exception).
wtallis
3 days ago
As far as I can tell, GCC supports compiling multiple versions of a function, but can't automatically decide which functions to do that for, or how many versions to build targeting different instruction set extensions. The programmer needs to explicitly annotate each function, meaning it's not practical to do this for anything other than obvious hot spots.
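Roughly, the annotation looks like this (a sketch using GCC's target_clones attribute; the function is made up):

    /* GCC emits one clone per listed target plus the "default" fallback,
       and dispatches between them at load time via an ifunc resolver. */
    __attribute__((target_clones("avx2", "sse4.2", "default")))
    double dot(const double *a, const double *b, int n) {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += a[i] * b[i];
        return s;
    }

Nothing decides for you which functions deserve this treatment, so in practice it only gets applied to known hot spots.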
mort96
3 days ago
You can do that to some limited degree, but not really.
There are more relevant modern examples, but one example that I really think illustrates the issue well is floating point instructions. The x87 instruction set is the first set of floating point instructions for x86 processors, introduced with the 8087 coprocessor around 1980. In the late 90s/early 2000s, Intel released CPUs with the new SSE and SSE2 extensions, with a new approach to floating point (x87 was really designed for use with a separate floating point coprocessor, a design that's unfortunate now that CPUs have native floating point support).
So modern compilers generate SSE instructions rather than the (now considered obsolete) x87 instructions when working with floating point. Trying to run a program compiled with a modern compiler on a CPU without SSE support will just crash with an illegal instruction exception.
There are two main ways we could imagine supporting x87-only CPUs while using SSE instructions on CPUs with SSE:
Every time the compiler wants to generate a floating point instruction (or sequence of floating point instructions), it could generate the x87 instruction(s), the SSE instruction(s), and a conditional branch to the right place based on SSE support. This would tank performance. Any performance saving you get from using an SSE instruction instead of an x87 instruction is probably going to be outweighed by the branch.
The other option is: you could generate one x87 version and one SSE version of every function which uses floats, and let the dynamic linker sort out function calls and pick the x87 version on old CPUs and the SSE version on new CPUs. This would more or less leave performance unaffected, but it would, in the worst case, almost double your code size (since you may end up with two versions of almost every function). And in fact, it's worse: the original SSE only supports 32-bit floats, while SSE2 supports 64-bit floats; so you want one version of every function which uses x87 for everything (for the really old CPUs), one version of every function which uses x87 for 64-bit floats and SSE for 32-bit floats, and you want one function which uses SSE and SSE2 for all floats. Oh, and SSE3 added some useful functions; so you want a fourth version of some functions where you can use instructions from SSE3, and use a slower fallback on systems without SSE3. Suddenly you're generating 4 versions of most functions. And this is only from SSE, without considering other axes along which CPUs differ.
You have to actively make a choice here about what to support. It doesn't make sense to ship every possible permutation of every function; you'd end up with massive executables. You typically assume a baseline instruction set from some time in the past 20 years, so you're typically gonna let your compiler go wild with SSE/SSE2/SSE3/SSE4 instructions and let your program crash on the i486. For specific functions which get a particularly large speed-up from using something more exotic (say, AVX512), you can manually include one exotic version and one fallback version of that function.
But this causes the problem that most of your program is gonna get compiled against some baseline, and the more constrained that baseline is, the more CPUs you're gonna support, but the slower it's gonna run (though we're usually talking single-digit percents faster, not orders of magnitude faster).
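For the "one exotic version plus one fallback" case, a minimal sketch of what that can look like (assuming GCC/Clang on x86-64; names are invented), with the CPU check paid once at startup rather than per call:

    #include <stddef.h>

    __attribute__((target("avx512f")))
    static void add_avx512(float *dst, const float *src, size_t n) {
        for (size_t i = 0; i < n; i++)  /* compiler may auto-vectorize this with AVX-512 */
            dst[i] += src[i];
    }

    static void add_baseline(float *dst, const float *src, size_t n) {
        for (size_t i = 0; i < n; i++)
            dst[i] += src[i];
    }

    /* Chosen once at program start; everything else just calls add_floats(). */
    void (*add_floats)(float *, const float *, size_t) = add_baseline;

    __attribute__((constructor))
    static void pick_add_floats(void) {
        if (__builtin_cpu_supports("avx512f"))
            add_floats = add_avx512;
    }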
nisegami
3 days ago
I consider it unlikely, but perhaps there's some instructions that don't have a practical polyfill for x86?
PhilipRoman
2 days ago
The only thing that comes to mind is some form of atomic instructions that need to interact with other code in well defined ways. I don't see how you could polyfill cmpxchg16b for example.
snackbroken
3 days ago
Less than 2.6% of browser users (with telemetry enabled) are using Firefox. Should the web drop support for Firefox? Seems sensible. (I would hope not)
Ukv
3 days ago
It'd be ~0.1% of Firefox users that use 32-bit Linux, extrapolating from e2le's statistics, not 2.6%. Have to draw the line at some point if an old platform is becoming increasingly difficult to maintain - websites today aren't generally still expected to work in IE6.
zamadatix
3 days ago
Firefox shouldn't need special support by the web, the same relationship can't be said of architecture specific binaries.
3np
3 days ago
I believe even the Raspberry Pi 4B and 400 still only have first-class drivers for 32-bit?
Kiosks and desktops and whatnot on Raspberry Pis are still on 32-bit, and likely to have Firefox without telemetry.
postepowanieadm
2 days ago
What's the threshold for minority to be ignored?
RainyDayTmrw
2 days ago
That's surprising. Why is there such a comparatively large number using 32-bit Firefox on 64-bit Windows?
ars
2 days ago
If they are like me, they simply never realized they needed to re-install Firefox after upgrading the OS.
Mozilla should try to automate this switch where the system is compatible with it.
bmicraft
2 days ago
Some people are under the misapprehension that 32-bit programs need less ram, which might explain that, but that's still a large number regardless.
xorcist
3 days ago
Why is it reasonable? I understand that it would be financially reasonable for a commercial endeavour, but isn't Firefox more like an open source project?
Debian supports MIPS and SPARC still. Last I checked, OpenSSL is kept buildable on OpenVMS. Surely there must be a handful of people out there who care about good old x86?
If your numbers are correct, there are millions if not tens of millions of Firefox users on 32-bit. If none of them are willing to keep Firefox buildable, there must be something more to it.
padenot
3 days ago
Debian stopped supporting 32-bit x86 recently; Chrome did so 9 or so years ago.
We carefully ran some numbers before doing this, and this affects a few hundred to a few thousand people (hard to say, ballpark), and most of those people are on 64-bit CPUs but are using a 32-bit Firefox or a 32-bit userspace.
The comparatively high ratio of 32-bit users on Windows is not naively applicable to the Linux desktop population, which migrated ages ago.
xorcist
2 days ago
To expand a bit on that, the i386 support that was recently downgraded to "partially supported" in Debian refers to project status. Issues on unsupported architectures cannot be considered blockers, for example. The packages are still being built, are published on the mirrors, and will be for the foreseeable future, as long as enough people care to keep it alive.
That's the specific meaning of "support" I intended to point out. Free software projects usually do not "support" software in the commercial sense, but consider platforms supported when there are enough people to keep the build alive and up to date with the changing build requirements etc. It was my expectation that Firefox was more like a free software project than a commercial product, but perhaps that is not the case?
Commercial products have to care about not spreading their resources thin, but for open source, cause and effect are the other way around: the resources available are usually the incoming parameter that decides what is possible to support. Hence my surprise that not enough people are willing to support a platform that has thousands of users and isn't particularly exotic, especially compared to what mainstream distributions like Debian already build.
anthk
2 days ago
GNUinos and any Devuan-based distro still support 32-bit.
dtech
3 days ago
An open source project should also use its resources effectively. There is always the possibility for a community fork.