Bought myself an Ampere Altra system

206 points | posted 7 months ago by pabs3 | 101 comments

a_t48

7 months ago

A few years ago I was working at a place that needed to do builds targeting the Jetson platform, and was somewhat allergic to tossing them into the cloud due to cost. We ran the numbers and the Altra paid for itself pretty quickly. Great machine: it ripped through our C++ builds + Docker image creation. I think we ended up with a 64-core version (don't remember through whom, but we needed a server form factor). We still ended up moving our release builds to the cloud due to some dicey internet situations, but for local builds this thing was A+. I hope they're still using it.

dingdingdang

7 months ago

Sounds good, but it also sounds odd that a dicey internet situation caused you to move to the cloud as opposed to staying local (I guess it could be a dicey uplink for a remote team or similar).

wffurr

7 months ago

Sending an ssh command vs uploading an entire release binary.

a_t48

7 months ago

The robots that needed to pull the builds weren't local - they were out on farms and stuff - and even with crappy 3G internet, our upload was still the problem. Comcast was really screwing us.

fschutze

7 months ago

I see that the Ampere Altra Q features the Armv8.2-A ISA. Does anybody know if there are chips with Armv8.6-A (or above), or even SVE, that one can buy? I did some research but couldn't find any.

adrian_b

7 months ago

Armv8.6-A is almost the same as Armv9.1-A, except that a few features are not mandatory.

There have been no consumer chips with Armv9.1-A, but only with Armv9.0-A and with Armv9.2-A. The only CPU with Armv8.6-A that has ever been announced publicly was the now obsolete Neoverse N2. Neoverse N2 has been skipped by Amazon and I do not know if any other major cloud vendor has used it.

So what you are really searching for are CPUs with Armv9.2-A (i.e. a superset of Armv8.6-A), i.e. with Cortex-A520, Cortex-A720, Cortex-X4, Cortex-A725 or Cortex-X925.

There are many smartphones launched last year or this year with these CPU cores, but apart from them the list of choices is short: either the very cheap Radxa Orion O6 (Cortex-A720 based), which is fine but has immature software; a very expensive NVIDIA DGX development system (Cortex-X925 based; $4000 from NVIDIA or $3000 from ASUS); or one of the latest Apple computers, which support Armv8.7-A (no SVE, but they do have SME).

For the latest Qualcomm CPUs, I have no idea what ISA is supported by them, because Qualcomm always hides very deeply any technical information about their products.

If all you care about is the CPU, then a mid-range Android smartphone in the $400-$500 price range could be a better development system, especially if its USB Type C connector supports USB 3.0 and DisplayPort, like some Motorola Edge models, allowing you to use an external monitor and a docking station.

If you also care about testing together with some standard desktop/server peripherals, the mini-ITX motherboard of Radxa Orion O6 is more appropriate, but encountering bugs in some of its Linux device drivers is likely, which may slow down the development until they are handled.

user

7 months ago

[deleted]

nearyd

7 months ago

AmpereOne CPUs are Armv8.6-A+ but do not have SVE2 - that is coming in our next processor, AmpereOne Aurora, designed specifically for AI workloads.

amelius

7 months ago

> And the latest one, an Apple MacBook Pro, is nice and fast but has some limits — does not support 64k page size. Which I need for my work.

I wonder where this requirement comes from ...

zozbot234

7 months ago

Asahi Linux might support 64k pages on Apple Silicon hardware. Might require patching some of the software though, if it's built assuming a default page size.

It should also be possible to patch Linux itself to support different page sizes in different processes/address spaces, which it currently doesn't. It would be quite fiddly (which is why it hasn't happened so far) but it should be technically feasible.

IIRC ARM64 hardware also has some special support (compared to x86 and x86-64) for handling multiple-page "blocks" - that kind of bridges the gap between a smaller and larger page size, opening up further scenarios for better support on the OS side.

haerwu

7 months ago

The Apple M-series family of CPUs supports only 4k and 16k page sizes. There is no way to run 64k-page binaries there without emulating the whole CPU.

zozbot234

7 months ago

This is not the full story. As I mentioned in the parent comment, ARM64 supports a special "contiguous" bit in its page table entries that, when set, allows a "block" of multiple contiguous pages to be cached in the TLB as a single entry. Thus, assuming 4k granularity, 64k "pages" could be implemented as contiguous "blocks" of 16 physical pages.

dist1ll

7 months ago

OP works for Red Hat, and some of the tests require booting systems with 64k pages.

What surprises me more is why Red Hat doesn't provide them with the proper hardware.

rwmj

7 months ago

Red Hat has dozens of internal aarch64 machines (similar to the one in the article) that can be reserved, but sometimes you just want a machine of your own to play with.

gbraad

7 months ago

We have access to many, such as the TestFarm, or machines you can reserve on what is called Beaker.

Note: Recently also purchased an Ampere machine with some other people. Just to play around and host stuff.

lisper

7 months ago

> some of the tests require booting systems with 64k pages

OK, but then why an 80-core CPU?

haerwu

7 months ago

@lisper because the Q32 is more expensive than the Q64, and I got an offer for a Q80.

The number of options for sensible AArch64 hardware is too small (or they are too expensive).

lisper

7 months ago

OK, this is something I know very little about so you may have to explain it like I'm a complete noob, but I still don't understand why, if all you want to do is boot 64-bit Linux, you couldn't use, say, a Raspberry Pi 4 instead of spending thousands of zloties on an 80-core machine that requires industrial cooling.

jrockway

7 months ago

A lot of these ARM boards use custom (read: outdated) kernels and proprietary boot methods, so I'm not really sure how applicable they are to people developing Linux distributions that work everywhere. NixOS, for example, is only supporting UEFI booting on ARM64 going forward. If Redhat has the same policy, then there is only a limited set of arm64 boards available. I researched this recently as I'd like to move my k8s cluster from renting expensive cloud machines to running them on cheap machines at home, and the situation is ... difficult. (I have tested the Orange Pi 5 Max and the Radxa Rock 5B+. Both required me to hack edk2-rk3588, but they do work well now that most rk3588 support is merged in Linux 6.15/6.16-rc1. But, this is an old CPU and is just now getting mainline kernel support, and that is always how arm has felt. It is, however, kind of neat to see a "BIOS" on an ARM board. I hope it catches on.)

ot

7 months ago

I would guess to develop and test software that will ultimately run on a system with 64k page size.

amelius

7 months ago

Is there a fundamental advantage over other page sizes, other than the convenience of 64k == 2^16?

dan-robertson

7 months ago

The reason to want small pages is that the page is often the smallest unit that the operating system can work with, so bigger pages can be less efficient – you need more ram for the same number of memory mapped files, tricks like guard pages or mapping the same memory twice for a ring buffer have a bigger minimum size, etc.

The reason to want pages of exactly 4k is that software is often tuned for this and may even require it, from not being programmed in a sufficiently hardware-agnostic way (similar to why running lots of software on big-endian systems can be hard).

The reasons to want bigger pages are:

- there is more OS overhead tracking tiny pages

- as well as caches for memory, CPUs have caches for the mapping between virtual memory and physical memory, and this mapping is page-size granularity. These caches are very small (as they have to be extremely fast) so bigger pages means memory accesses are more likely to go to pages in the cache, which means faster memory accesses.

- CPU caches are addressed based on the index into the minimum page size so the max size of a cache is page-size * associativity. I think it can be harder to increase the latter than the former so bigger pages could allow for bigger caches, which can make some software perform better.

These things you see in practice are:

- x86 supports 2MB and 1GB pages, as well as 4KB pages. Linux can either directly give you pages in these larger sizes (a fixed number are allocated at startup by the OS) or there is a feature called ‘transparent hugepages’ where sufficiently aligned contiguous smaller pages can be merged. This mostly helps with the first two problems

- I think the Apple M-series chips have a 16k default page size, which might help with the third problem, but I don’t really know much about them

p_ing

7 months ago

I believe this is true for x86 as a whole, but on NT any large page must be mapped with a single protection applied to the entire page, so if the page contains read-only code and read-write data, the entire page must be marked read-write.

raverbashing

7 months ago

Yes there are

(as a starting point 4k is a "page size for ants" in 2025 - 4MB might be too much however)

But the bigger the page, the fewer TLB entries you need, and the fewer entries in your OS data structures managing memory, etc.

fc417fc802

7 months ago

4K seems appropriate for embedded applications. Meanwhile 4M seems like it would be plenty small for my desktop. Nearly every process is currently using more than that; even the lightest is still coming in at a bit over 1M.

p_ing

7 months ago

1M is a huge waste of memory.

Imagine writing out a one sentence note in notepad and the resulting file being 1M on disk.

fc417fc802

7 months ago

Yet when I reference the running processes on my desktop something like 90% of them have more than 16M resident. So it doesn't appear that even an 8M page size would waste much on a modern desktop during typical usage.

If I'm mistaken about some low level detail I'd be interested to learn more.

ch_123

7 months ago

64k is the largest page size that the ARM architecture supports. The large page size provides advantages for applications which allocate large amounts of memory.

maz1b

7 months ago

I've always wondered why there isn't a bigger market / offering for dedicated servers with Ampere at their heart (apart from Hetzner)

If anyone knows of any, let me know!

ozgrakkurt

7 months ago

A lot of software is built and optimized for x86, and EPYC processors are really good, so it is hard to get into ARM; I don’t think that many companies use it.

adev_

7 months ago

> A lot of software is built and optimized for x86, and EPYC processors are really good, so it is hard to get into ARM; I don’t think that many companies use it.

That is just not true.

Nowadays, most OSS software and most server-side software will run without any hitch on armv8.

A tremendous amount of work has been done to speed up common software on armv8, partially due to the popularity of mobile as a platform, but also due to the emergence of ARM servers (Graviton / Neoverse) in the major cloud providers' infrastructure.

p_l

7 months ago

However, it's hard to get into ARM other than using cloud offerings.

Because those cloud offerings have handled for you the problematic case of ARM generally operating as "closed platform" even when everything is open source.

On a PC server, usually you only hit any issues if you want to play with something more exotic on either software or hardware. Bog-standard linux setup is trivial to integrate.

On ARM, even though finally there's UEFI available, I recall that even few years ago there were issues with things like interrupt controller support - and that kind of reputation persists and definitely makes it harder to percolate on-prem ARM.

It also does not help that you need to go for pretty pricy systems to avoid vendor lock-in at firmware compatibility level - or had to, until quite recently.

rixed

7 months ago

Why is it hard to get a mac or a pi?

p_l

7 months ago

The Pi is relatively underpowered (quite underpowered, even), has a proprietary boot system, and similarly isn't exactly good with things you might want in a professional server (there are some boards using compute modules that provide them as add-ons, but it's generally not a given). Also, it is I/O starved.

Mac is similarly an issue of proprietary system with no BMC support. Running one in a rack is always going to be at least partially half-baked solution. Additionally, you're heavily limited in OS support (for all that I love what Asahi has done, it does not mean you can install let's say RHEL on it, even in virtual machine - because M-series chips do not support 64kB page size which became the standard on ARM64 installs in the cloud, for example RHEL defaults to it and it was quite a pain to deal with in a company using Macbooks).

So you end up "shopping" for something that actually matches server hardware and it gets expensive and sometimes non-trivial, because ARM server market was (probably still is) not quite friendly to casually buying a rackmount server with ARM CPUs for affordable prices. Hyperscalers have completely different setups where they can easily tank the complexity costs because they can bother with customized hardware all the way to custom ASICs that provide management, I/O, and hypervisor boot and control path (like AWS Nitro).

One option is to find a VAR that actually sells ARM servers and not just appliances that happen to use ARM inside, but that's honestly a level of complexity (and pricing) above what many smaller companies want.

So if you're on a budget it's either cloud(-ish) solutions or maybe one your engineers can be spared to spend considerable amount of time to build a server from parts that will resemble something production quality.

rixed

7 months ago

The Pi is not that underpowered per dollar, is it?

I think Google demonstrated 20 years ago that server-grade hardware is no match for fault tolerance in software. Plenty of build farms use Pis, running standard, flawless arm64 Linux distros.

p_l

7 months ago

The RPi does not support everything you may need to use, and build farms are not the only use case for ARM servers.

And Google could do what they did because of their scale: buying cheap hardware at that volume changed the performance calculus.

And even Google switched to denser compute over time, with custom hardware even.

haerwu

7 months ago

Any Apple Mac you can buy now will have an M3 or M4 CPU, while the Asahi team supports only the M1 and M2 families.

So you cannot run Linux natively on currently-in-store Mac hardware.

And the Raspberry Pi is a toy, without any good support in mainline Linux.

delfinom

7 months ago

Have you ever tried to run a Mac "professionally" as a role of a server?

It's absolute garbage. For the last few years you can't even run daemons on a Mac without a user actually logging into it on boot. And that's just the beginning of how bad they are.

And don't get me wrong, I'm not shitting on Macs here, but Apple does not intend for them to be used as servers in the slightest.

arp242

7 months ago

Most of the times I've looked at it, ARM still seems to be lagging behind x86 a bit, but the gap is much smaller than it used to be. For example: https://news.ycombinator.com/item?id=41925983

Of course the specifics will depend on what you're doing: for certain applications or code-paths ARM may well have 100% parity, or perhaps even have more optimisations than x86.

Someone

7 months ago

If you use AWS, lots of software can easily be run on Graviton, and lots of companies do that.

https://www.theregister.com/2023/08/08/amazon_arm_servers/:

“Bernstein's report estimates that Graviton represented about 20 percent of AWS CPU instances by mid-2022“

And that’s three years ago. Graviton instances are cheaper than (more or less) equivalent x86 ones on AWS, so I think it’s a safe bet that number has gone up since.

baq

7 months ago

yeah, if you're running a node backend, the changes are cosmetic at best (unless you're running chrome to generate pdfs or whatever). easiest 20% saved ever. if I were Intel or AMD I would have been very afraid of this... years ago.

imtringued

7 months ago

I was scared of ARM taking over in 2017 (e.g. Windows being locked down to just the Microsoft store) and 8 years later literally nothing happened.

yjftsjthsd-h

7 months ago

I would not call Windows RT "literally nothing". It failed, but it clearly was an attempt to lock things down.

p_ing

7 months ago

Windows S exists today.

zxexz

7 months ago

I don’t think a lot of companies realize they are using it. At three companies now, I’ve witnessed core microservices migrate to ARM seamlessly, due to engineering being under direct pressure to “reduce cloud spend”. The terrifying (and amazing) bit is that moving to ARM was enough to get finance off engineering’s back in all cases.

M0r13n

7 months ago

I am running an ARM64 build of Ubuntu on my MacBook Air using Multipass. I've never had a problem due to missing support/optimisation for ARM - at least I didn't notice any. I even noticed that build times were faster on this virtualised machine than they were natively on my previous Tuxedo laptop, which had an Intel i7 that was a couple of years old - though I attribute this speed mostly to the sheer horsepower of the newest Apple chips.

user

7 months ago

[deleted]

moffkalast

7 months ago

They're slow and the arch is less compatible? Arm cores in web hosting are typically known as the shit-tier.

I think the main use case for these is some sort of Android build farm, as a CI/CD pipeline with testing of different OS versions and general app building, since they don't have to emulate arm.

dijit

7 months ago

Well, I've run some Ampere Altra ARM machines in my studio so I can speak to this;

A) No, you can't use ARM for Android build farms, as Android's build tools only work on x86 (go figure).

B) Ampere Altra runs faster for throughput than x86 on the same lithography and clock frequency; I can't imagine how they'd be slower for web, it's not my experience with these machines under test. Maybe virtualisation has issues (I ran bare-metal containers - as you should).

My original intent was to use these machines as build/test clusters for our go microservices (and I'd run ARM on GCP) but GCP was a bit too slow to roll out and now we're far into feature locking any migrations of that.

So I added the machines to the general pool of compute and they run bots, internal webservices etc; with Kubernetes.

The performance is extremely good, only limited by the fact we can't use them as build machines for the game due to the architecture difference - however for storage or heavy compute they really outperform the EPYC Milan machines which are also on a 7nm lithography.

zozbot234

7 months ago

> No, you can't use ARM for Android build farms, as Android's build tools only work on x86 (go figure).

Does qemu-user solve that, or are there special requirements due to JIT and the like that qemu-user can't support?

dijit

7 months ago

I'm not sure I'm comfortable buying hardware for build systems that has to emulate an instruction set to build.

Doubly so when that hardware natively runs the target's instruction set.

bayindirh

7 months ago

I think the best thing that happened to that system is having an Arctic Cooling unit on board. These things are as reliable as it gets.

None of the Arctic Cooling fans I've owned has ever failed or lost performance over the years. Even their first-generation desktop fan (the Breeze), which has run multiple 8-12 hour shifts with me for the last decade, shows no signs of age.

fschutze

7 months ago

I never bought used computer parts. Are these parts generally reliable for ~2 years when bought used?

throw-qqqqq

7 months ago

I haven’t bought new hardware since I was a teenager. Second hand is cheap and good for the environment. I never received a broken part and everything has worked reliably for me.

2-3 years is not a lot. My daily driver laptop is from 2011 and still going strong.

Sure, there are “lemons” out there, but there are also a lot of people who just replace their hardware often.

nisa

7 months ago

I concur. I've been doing this for almost all my technical equipment and mobile phones and never had a problem. For important/expensive things you can buy from refurbished stores that offer a 1-year warranty in the EU.

amelius

7 months ago

Are you still using the same battery?

throw-qqqqq

7 months ago

Haha yes, but it doesn’t last for more than 20-30 mins now. It used to be 7-8 hours for the first five or six years, then dropped off.

I also only buy used phones (I don’t have high requirements) and, as with laptops, batteries are the “weak link” - as you correctly point out.

A brand new battery for my laptop can be had for ~30-65 USD though, and the battery is easy to replace (doesn’t even require a screwdriver). I never use it untethered anymore though, so I don’t bother.

amelius

7 months ago

Ha, ok. I sometimes read that old batteries pose a physical risk, but I thankfully haven't experienced that. Maybe something to keep in mind though.

I'd like to see some numbers on it.

throw-qqqqq

7 months ago

For me they usually just lose capacity and/or ability to charge. Most laptops will keep functioning when tethered.

I think most batteries must puncture or corrode to pose a physical hazard. Alkaline batteries can corrode, but I’ve never seen issues with old Li-Ion unless exposed to violence and/or water.

EDIT:

Some numbers found quickly: https://www.britsafe.org/safety-management/2024/lithium-ion-...

magicalhippo

7 months ago

The main failure points in electronics are by far power supply and batteries.

Non-polymer electrolytic capacitors can dry out, but just about all decent modern motherboards have used polymer-based capacitors for years now.

My current NAS is my previous desktop, which I bought in 2015. I tended to keep my desktop on 24/7 due to services, and my NAS as well, so it's been running more or less continuously since then. It's on its second PSU but apart from that chugging along.

I've been using older computer parts like this for a long time, and reliability increased markedly after they switched to polymer caps.

Modern higher-end GPUs due to their immense power requirements can have components fail, typically in the voltage regulation. Often this can be fixed relatively cheaply.

If buying a desktop I'd check that it works, it looks good inside (no dust bunnies etc), seller seems legit, and I'd throw a new PSU in there once I bought it.

pabs3

7 months ago

My current computer is from more than 10 years ago, and I found it in a dumpster. Works fine.

codr7

7 months ago

You're not going to leave us hanging with the specs, right?

pabs3

7 months ago

Bog-standard Asus mobo with an Intel i5 CPU, one USB3 port, some extra RAM begged from other folks and from previous desktops also from dumpsters, SATA 6Gb/s ports, and a new-at-the-time SATA SSD. First UEFI computer I've had. Found in a dumpster a year or two ago.

cornichon622

7 months ago

Built a gaming desktop for a friend almost 2 years ago, with a used GPU and CPU (maybe a few other things too); everything's going great. It helps that our local Craigslist equivalent offers efficient buyer protection.

Server-side, I also bought used Xeons for an old box and recertified 10TB Exos drives. No issues there either.

The HDDs are a bit of a gamble, but for anything else I can only encourage you to buy used!

throwaway2037

7 months ago

> It helps that our local Craigslist equivalent offers efficient buyer protection.

What does this mean?

cornichon622

7 months ago

When you buy something on Le Bon Coin, if it doesn't conform to the description or is outright broken, you can very easily get a refund from the platform itself up to 3 days after reception, even if the seller disappears. I think it encourages people to buy used stuff by removing the main drawback, which is "it may not work properly anymore".

avhception

7 months ago

I regularly buy used hardware. It fails when it fails, same as the new stuff. Is there a higher probability? Possibly, but at the small sample sizes I'm at I can't feel the difference. Feels random either way.

ekianjo

7 months ago

Used professional hardware (servers, workstations) is made to higher quality standards, so it lasts fairly long.

Cthulhu_

7 months ago

Plus if they're from a data center, they will have been in a cooled, filtered, and stable space for their lifetime, vs a desktop that may have been in a dusty room getting moved or kicked from time to time.

tasuki

7 months ago

Also they're good for heating your home.

NexRebular

7 months ago

Unless you are running SUN CoolThreads(tm) servers!

lproven

7 months ago

> I never bought used computer parts.

I was taken aback by this.

I almost never buy new parts, except phones, and not always then.

I don't think I've bought a new computer since about 2001 or 2002, and then, that was because someone else was paying and her stipulation was new only. Before then... the 1980s?

Computer hardware is like a car: when you exit the shop, 25% of the value just dissipates like a puff of steam. Within about 3 years, another 50-60% is gone. So, I always try to buy kit that's more than about 3Y old, because that's when it becomes cost-effective.

When you pay 10% of the new price, that means you are getting at least 10x the price:performance ratio. It's almost impossible to buy anything new that is 10x faster than something ~3 years old and it has been for 20-25 years or more now.

theandrewbailey

7 months ago

I work at an e-waste recycling company. People throwing out old but still working servers, desktops, and laptops is pretty common. Companies regularly decommission and throw out their IT assets after some number of years, even if the stuff still works (which most of it still does).

mrheosuper

7 months ago

those are server-grade stuff, it's normal for them to work 10 years continuously.

haerwu

7 months ago

OK, I tend to ignore HN, but I got a link to this post from several people, so I will go and comment.

user

7 months ago

[deleted]

dwaaa

7 months ago

[flagged]

burnt-resistor

7 months ago

1341 PLN / 371 USD isn't "cheap" for 25% more cores. That's almost double the price.

Q64-22 on eBay (US) for $150-200 USD / 542-723 PLN.

https://www.ebay.com/itm/365380821650

https://www.ebay.com/itm/365572689742

Aissen

7 months ago

25% more cores and 36% more clock. It amounts to paying 85% more for 70% more perf. Not too bad.

burnt-resistor

7 months ago

GHz/MHz wars and ultra-deep pipelines lead to long pipeline stalls and low efficiency. Clock speed != instructions per second. What matters is measured performance. There's little point in buying the most expensive option; it's usually throwing money away.

Sometimes, though, options are limited, but there are also traditional and alternative-channel vendors besides secondary markets. For example, a vendor or the manufacturer might be willing to sample a part.

Aissen

7 months ago

When comparing the same design, it absolutely makes sense, because performance usually scales with clock. Of course in this case there's the SRAM, which is shared, memory bandwidth, etc. As a rule of thumb: run your own benchmark!

szszrk

7 months ago

He clearly refers to that and states they did not respond.

Also, CPU was hardly the biggest cost here.

haerwu

7 months ago

Those auctions are the ones we looked at. No answer from the seller - probably they did not want to deal with sending packages outside of the USA.

timzaman

7 months ago

Offtopic: I'm so confused why this is #1 on my HN? It's just a pretty normal build?

eqvinox

7 months ago

It's not "your" HN, HN doesn't do algorithmic/per-user ranking. (Ed.: Actually a refreshing breath of wide social cohesion on a platform, IMHO. We have enough platforms that create bubbles for you.)

It's top1 on everyone's HN because a sufficient number of people (including myself) thought it a nice writeup about fat ARM systems.

baq

7 months ago

I haven’t been following hardware for a while, granted, but this is the first time I see a desktop build with an arm64 cpu. Didn’t know you can just… buy one.

avhception

7 months ago

For what it's worth, I've been using a Lenovo X13s for some 3 months now. It's not a desktop, and it took years for core components to be supported in mainline Linux, but I do use it as a daily driver now. The only thing that's still not working is the webcam.

szszrk

7 months ago

A "normal" ARM64 80-core system with a $1000 EATX motherboard? How is this typical?

tinix

7 months ago

EATX is a pretty standard server motherboard form factor.

It's not even a multiple CPU board...

This is indeed a pretty standard (and weak) ARM server build.

You can get the same family's M128-30 CPU, with 128 3GHz cores, for under $800 USD.

You can throw two into a Gigabyte MP72-HB0 and fit it into a full tower case easily.

That'd only cost like $3,200 USD for 256 cores.

RAM is cheap, and that board could take 16 DIMMs.

If you used 16 GB DIMMs like OP, that's only 256 GB of RAM; in a server, that is not much (only one gig per core) for like $500 USD.

Maybe for a personal build this seems extravagant but it's nothing special for a server.

haerwu

7 months ago

Depends on how you look at it.

Would you call a Threadripper system "a normal build"? For many people they are normal builds, because they need more computing power or more PCIe lanes than a "normal user" desktop has.

On the other side you have those who pretend to use a Raspberry Pi 3 as "an Arm desktop" despite only 1GB of RAM and 4 sluggish cores.