Intel's E2200 "Mount Morgan" IPU at Hot Chips 2025

76 points, posted 14 hours ago
by ingve

29 Comments

mappu

13 hours ago

This is Intel making a 24-core Neoverse N2 server on TSMC - not their ISA, not their core design, and not their fab

pclmulqdq

10 hours ago

This isn't really a server. This is a NIC with some small cores to help handle management functions. The server you plug it into will have hundreds of x86 cores.

Palomides

12 hours ago

the Arm cores are absolutely the least interesting part of this thing; does it matter much if they're outsourced?

wmf

12 hours ago

Barefoot was always on TSMC so why change now.

matt-p

13 hours ago

Yep, it's only recently that they've even properly started cranking out 10nm themselves. Pretty embarrassing. I wonder what future we have if everyone is just sat on top of TSMC; not great.

wtallis

12 hours ago

You must be using odd definitions for "properly" and "recently". Intel started volume shipments of 10nm-family parts for laptops in 2019, servers in 2021, and desktops in 2022. They've since moved most of their products off of the 10nm family and onto EUV-based processes: two generations of laptop parts, one generation of desktop parts, and the CPU chiplets of last year's server parts (which still use "Intel 7" for the IO chiplets).

Additionally, the second and third rounds of desktop parts released on 10nm (aka "Intel 7") are now known to have pushed clocks and voltages somewhat beyond the limits of the process, leading to embarrassing reliability problems and microcode updates that hurt performance. Intel has squeezed everything they can out of their 10nm process and has mostly put it behind them, so talking about it like they only recently ramped production is totally wrong about where they are in the lifecycle.

aseipp

12 hours ago

What? Intel has been doing large-scale production runs on their 10nm node for years now. If you're talking about the Raptor Lake failures, that was one generation of products on that node, and there has never been any indication AFAIK that e.g. Emerald Rapids suffered the same oxidation/voltage failures the consumer line did, despite being on the same process node. They're already moving on from all this, really.

SecretDreams

12 hours ago

These are some quite outdated/interesting hot takes.

colechristensen

13 hours ago

Missteps happen, but I have a feeling Intel's fab is going to be forced to be near the leading edge one way or another. The US government has plenty of levers to pull to manipulate the global semiconductor market.

trebligdivad

13 hours ago

The ability to connect to 4 hosts makes it seem like MR-IOV all over again! Still, it does look like a fun device from the 'big Arm chip with lots of connectivity' side

jeffbee

13 hours ago

It's quite interesting. Basically Nitro on a stick. For the "repatriation" crowd this seems appealing. But would you invest in the software necessary to exploit this, knowing that Intel could lose interest or just go bankrupt with little warning?

pwarner

10 hours ago

Presumably all hyperscalers who aren't Amazon could be customers for this? One of them might be enough to keep it viable. See the sibling comment about Google being a customer for, presumably, the previous generation.

wmf

12 hours ago

I wouldn't be surprised if Google buys the IP since they're the only customer.

pyvpx

11 hours ago

How, though? Does the TPU team (literally or logically) map to owning IPU h/w successfully?

(I miss having these kinds of convos on twitter as networkservice ;)

pclmulqdq

10 hours ago

There's a lot more silicon work at Google aside from the TPU team, including their own previous NICs.

pwarner

10 hours ago

I believe they have other custom silicon beyond TPUs, so it wouldn't be crazy to take this in-house if Intel really cans it.

lenerdenator

12 hours ago

I think at this point, it's clear that the US government will not let Intel go bankrupt without a serious effort to put the company in healthy financial standing first.

Whether or not that's a good thing, well, people have their opinions, but Intel is considered a national security necessity.

jiggawatts

11 hours ago

That raises the question: how would one go about utilising this thing in their own deployment?

redok

an hour ago

The primary customers for this would be infrastructure providers that want to give the host full control of the hardware (bare metal, no hypervisor) while still maintaining control of the IO (network-attached storage and network isolation).

Conventionally this is done in software with a hypervisor, which emulates network devices for VMs (virtio/vmxnet3, etc.) and does some sort of network encapsulation (VLAN, VXLAN, etc.). Similar things are done for virtual block storage (virtio-blk, NVMe, etc.) to attach to remote drives.
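
To make the encapsulation side concrete: under VXLAN, the software datapath wraps each guest Ethernet frame in an 8-byte VXLAN header carrying a 24-bit VNI (the per-tenant segment ID) and ships it over UDP port 4789. Here's a minimal Python sketch of that wrap, following the RFC 7348 framing; the function names and the toy UDP sender are illustrative, not anything from the article:

    import socket
    import struct

    VXLAN_PORT = 4789  # IANA-assigned VXLAN UDP port (RFC 7348)

    def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
        """Prepend a VXLAN header to an inner Ethernet frame."""
        assert 0 <= vni < (1 << 24), "VNI is a 24-bit segment ID"
        # Flags byte 0x08 (VNI present) + 3 reserved bytes,
        # then the VNI in the top 24 bits of the last 4 bytes.
        header = struct.pack("!BBH", 0x08, 0, 0) + struct.pack("!I", vni << 8)
        return header + inner_frame

    def send_to_vtep(inner_frame: bytes, vni: int, remote_vtep: str) -> None:
        """Tunnel a guest frame to the remote VXLAN tunnel endpoint."""
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(vxlan_encap(inner_frame, vni), (remote_vtep, VXLAN_PORT))

The point of an IPU is that this per-packet wrap/unwrap, plus the flow lookup that picks the VNI, runs on the card instead of burning host cores.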

If the IaaS clients are high-bandwidth or running their own virtualization stack, the infrastructure provider has nowhere to put this software. You can do the infrastructure's network and storage isolation on the network switches with extra work, but then the termination of the networking and storage has to be done in cooperation with the clients (and you can't trust them to do it right).

Here, the host just sees PCI-attached network interfaces and directly attached NVMe devices, which pop up as defined by the infrastructure. These cards are the compromise where you let everyone have bare metal but keep your software-defined network and storage. In advanced cases you could even dynamically shape bandwidth between network and storage traffic.
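
And from the host side there's nothing special to integrate against: the card's functions enumerate as ordinary PCI devices. A quick way to see that on Linux is to walk sysfs and check each device's class code; this is a generic sketch assuming the standard /sys layout, nothing IPU-specific:

    from pathlib import Path

    # PCI class code prefixes (per the PCI class code assignments):
    # 0x0108 = NVM (NVMe) controller, 0x02 = network controller.
    CLASSES = {"0x0108": "NVMe controller", "0x02": "network controller"}

    def scan_pci() -> None:
        """List PCI functions the host sees, e.g. from an IPU-backed slot."""
        for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
            cls = (dev / "class").read_text().strip()  # e.g. "0x010802"
            for prefix, name in CLASSES.items():
                if cls.startswith(prefix):
                    vendor = (dev / "vendor").read_text().strip()
                    device = (dev / "device").read_text().strip()
                    print(f"{dev.name}  {name}  vendor={vendor} device={device}")
                    break

    if __name__ == "__main__":
        scan_pci()

To the guest, an NVMe namespace backed by remote storage looks like a local drive; all the translation happens on the card.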

pwarner

10 hours ago

Presumably first hire a few developers to program it.

YesThatTom2

12 hours ago

I hope their Linux code isn’t as outdated and buggy as their IPMI system.