News: Arm announces next-generation core family called Arm Lumex

36 points, posted 19 hours ago
by HeyMeco

16 Comments

HeyMeco

19 hours ago

> The Arm C1 Ultra CPU aims for +25% single-threaded performance and double-digit IPC gains

Delivering double-digit IPC improvements (looks like the industry is still competitive).

The new Mali GPUs don't look bad either, with +20% performance while being 9% more power efficient.

And SME2-enabled Armv9.3 cores for on-device AI don't sound bad either.

znpy

16 hours ago

This is going to be a killer in the cloud when it lands in Graviton CPUs.

adrian_b

10 hours ago

It is not clear on which core the successor of Neoverse V3 (the server version of Cortex-X4, which is used in the latest Graviton) will be based.

Arm C1-Ultra is the successor of Cortex-X925. C1-Ultra has great improvements in single-thread performance, but Cortex-X925 had very poor performance per die area, which made it totally unsuitable for server CPUs. Arm has not said anything about the performance per area of C1-Ultra, so I assume that it continues to be poor.

Arm C1-Pro is the successor of Cortex-A725. Arm has made server versions of the Cortex-A7xx, but Amazon did not like them for Gravitons, for being too weak.

Therefore only Arm C1-Premium could have a server derivative that would become the successor of Neoverse V3 for a future Graviton.

For now, the technical manual of C1-Premium is very sparse. Only when the optimization guide for C1-Premium is published, showing its microarchitecture, will we know whether it is a worthy replacement for Cortex-X4/Neoverse V3, which had the best performance per die area among the previous Arm CPU cores.

rickdeckard

19 hours ago

Good.

Curious to see how much of this new arch will actually be adopted by Qualcomm, or whether they will diverge further with their (Nuvia-acquired) architecture.

Either way, I hope the result doesn't cause fragmentation in the market (e.g. developers not making use of next-gen ARM features because Qualcomm doesn't support them).

dogma1138

17 hours ago

Given the litigation, I don't see Qualcomm adopting any new cores while continuing to develop their own. It's going to be too risky: regardless of how many firewalls they put in place, ARM could claim that their IP spilled over.

rickdeckard

17 hours ago

My last status is that ARM backed down from invalidating Qualcomm's ALA license earlier this year, so Qualcomm still has an architecture license to integrate ARM's designs into their own custom cores.

Am I missing something...?

daft_pink

10 hours ago

Here's hoping Nvidia gets the same treatment from Arm as Intel did.

M95D

17 hours ago

I can only reach the "meh" level of enthusiasm. RK3588 was released in 2022 and AFAIK it still doesn't have video decoding acceleration in mainline kernel/mesa/ffmpeg.

jauntywundrkind

8 hours ago

I generally find ARM's non-delivery and subsequent lack of drivers super grating as well. That said, I believe this video block is non-ARM; it's third-party IP.

Maybe also worth mentioning that the RK3588 uses Cortex-A76 cores, which Arm announced in 2018, so it was a four-year-old design at time of release. At this pace it seems to take the better part of a decade to get an Arm core out and generally usable.

I really, really hope some of this video encoding work helps lay a foundation for making further mainline VPU support easier. I bought a cute small RK3566 board hoping to make a cheap low-power WiFi video transmitter, and of course it requires a truly prehistoric vendor-provided kernel to take advantage of the VPU, alas. Scant hope of this ever improving, but maybe some decade drivers won't be a Scythian nightmare.

It's nice seeing a second player come to the GPU/video space at least. Imagination GPUs are in the new Pixel phone! And a bunch of various designs here and there. Maybe they can get religion and work a little harder than others have at upstreaming. There were some promising early mainlinings, but I've not seen much in kernelnewbies release logs for a while now: troubling silence.

avhception

17 hours ago

What exactly is this on-device AI stuff that everybody is talking about? I'm a mere Sysadmin, so probably I'm missing something here.

The last time I tried to run local LLMs on my 7900 XT via LM Studio, even with 20 GB of VRAM, they were borderline usable: fast enough, but the quality of the answers and generated code was complete and utter crap. Not even in the same ballpark as Claude Code or GPT-4/5. I'd love to run some kind of supercharged command-line completion on there, though.

Edit: I guess my question is: what exactly justifies the extra transistors that ARM here, and also AMD with their "AI MAX" parts, keep stuffing onto their chips?

theuppermiddle

16 hours ago

I guess AI is not just LLMs. Image processing, speech-to-text, etc. would fall under the use case. Regarding GenAI, Pixel phones already run the Nano model on-device with decent performance and utility.

HeyMeco

16 hours ago

Think of the photos app on your phone and its "intelligent" search bar.

untrimmed

17 hours ago

This feels more like Arm giving its partners the homework to catch up with Apple, rather than a true innovation leap. Apple integrates hardware and software seamlessly. This just provides the raw ingredients.

0points

16 hours ago

> This just provides the raw ingredients.

Arm doesn't build operating systems. But you already knew that. So your post is merely troll bait.