viccis
4 days ago
>Fortunately, those extra PCIe lanes tend to get repurposed as additional M.2 holes.
Or unfortunately, for the unlucky people who didn't do their research and now have their extra M.2 drives eating into their GPU's PCIe lanes.
Numerlor
4 days ago
The vast majority of people run just one GPU, for which motherboards have a dedicated direct-to-CPU x16 slot. Stealing lanes comes into play with chipset-connected slots.
zten
4 days ago
I bought a Gigabyte X870E board with 3 PCIe slots (PCIe5 16x, PCIe4 4x, PCIe3 4x) and 4 M.2 slots (3x PCIe5, 1x PCIe 4). Three of the M.2 slots are connected to the CPU, and one is connected to the chipset. Using the 2nd and 3rd M.2 CPU-connected slots causes the board to bifurcate the lanes assigned to the GPU's PCIe slot, so you get 8x GPU, 4x M.2, 4x M.2.
I wish you didn't have to buy Xeon or Threadripper to get considerably more PCIe lanes, but for most people I suspect this split is acceptable. The penalty for gaming going from 16x to 8x is pretty small.
ciupicri
4 days ago
For a moment I didn't believe you, then I looked at the X870E AORUS PRO ICE (rev. 1.1) motherboard [1] and found this:
> 1x PCI Express x16 slot (PCIEX16), integrated in the CPU:
> AMD Ryzen™ 9000/7000 Series Processors support PCIe 5.0 x16 mode
> * The M2B_CPU and M2C_CPU connectors share bandwidth with the PCIEX16 slot.
> When the M2B_CPU or M2C_CPU connector is populated, the PCIEX16 slot operates at up to x8 mode.
[1]: https://www.gigabyte.com/Motherboard/X870E-AORUS-PRO-ICE-rev...
wtallis
4 days ago
IIRC, X870 boards are required to spend some of their PCIe lanes on providing USB4/Thunderbolt ports. If you don't want those, you can get an X670 board that uses the same chipset silicon but provides a better allocation of PCIe lanes to internal M.2 and PCIe slots.
elevation
4 days ago
Even with a Threadripper you're at the mercy of the motherboard design.
I use a ROG board that has 4 PCIe slots. While each can physically seat an x16 card, only one of them has 16 lanes -- the rest are x4. I had to demote my GPU to a slower slot in order to get full throughput from my 100GbE card. All this despite having a CPU with 64 lanes available.
grw_
4 days ago
I don't think the Threadripper platform is to blame for you buying a board with potentially the worst possible PCIe lane routing. The latest generation has 88 usable lanes at minimum, most boards have four x16 slots, and Pro supports 7x Gen 5.0 x16 links, an absolutely insane amount of IO. "At the mercy of motherboard design" - do the absolute minimum amount of research and pick any other board?
nrdvana
4 days ago
You're using 100GbE ... in an end-user PC? What would you even saturate that with?
aaronmdjones
4 days ago
I wouldn't think it's about saturating it during normal use; rather, simply exceeding 40 Gbit/s, which is very possible with solid-state NASes.
Dylan16807
4 days ago
Okay, but then I need to ask what kind of use case doesn't mind the extra latency from ethernet but does care about the difference between 40Gbps and 70Gbps.
kimixa
4 days ago
Though for the most part, the performance cost of going down to x8 PCIe is often pretty tiny - only a couple of percent at most.
[0] shows a pretty "worst case" impact of 1-4% - and that's with the absolute highest-end card possible (a GeForce 5090) pushed all the way down to x16 PCIe 3.0. A lower-end card would likely show an even smaller difference. They even showed zero impact from x16 PCIe 4.0, which is the same bandwidth as 8 of the PCIe 5.0 lanes supported on X870E boards like you mentioned.
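For a rough sense of those bandwidth numbers, here's a back-of-the-envelope sketch in Python (theoretical one-direction link rates; real-world throughput is a bit lower due to protocol overhead):

    # Approximate one-direction PCIe bandwidth per lane, in GB/s.
    # Gen 1/2 use 8b/10b encoding; Gen 3+ use 128b/130b.
    PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969, 5: 3.938}

    def link_bandwidth(gen: int, lanes: int) -> float:
        """Theoretical one-direction bandwidth of a PCIe link in GB/s."""
        return PER_LANE_GBPS[gen] * lanes

    print(f"PCIe 3.0 x16: {link_bandwidth(3, 16):.1f} GB/s")  # ~15.8
    print(f"PCIe 4.0 x16: {link_bandwidth(4, 16):.1f} GB/s")  # ~31.5
    print(f"PCIe 5.0 x8:  {link_bandwidth(5, 8):.1f} GB/s")   # ~31.5 -- same as 4.0 x16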
Though if you're not on a gaming use case and know you're already PCIe-limited, the impact could be larger - but people with that sort of use case likely already know what to look for, and have systems tuned to it rather than a "generic consumer gamer board".
[0] https://gamersnexus.net/gpus/nvidia-rtx-5090-pcie-50-vs-40-v...
dur-randir
4 days ago
>I wish you didn't have to buy Xeon
But that's the whole point of Intel's market segmentation strategy - otherwise their low-tier workstation Xeons would see no market.
vladvasiliu
4 days ago
I wonder how this works. I'm typing this on a machine running an i7-6700K, which, according to Intel, only has 16 lanes total.
It has a 4x SSD and a 16x GPU. Their respective tools report them as using all the lanes, which is clearly impossible if I'm to believe Intel's specs.
Could this bifurcation be dynamic, and activate those lanes which are required at a given time?
toast0
4 days ago
For Skylake, Intel ran 16 lanes of PCIe to the CPU and ran DMI to the chipset, which had PCIe lanes behind it. Depending on the chipset, there would be anywhere from 6 lanes at PCIe 2.0 to 20 lanes at PCIe 3.0. My wild guess is that a board from back then would have put the M.2 behind the chipset, with no CPU-attached SSD for you; that fits with your report of the GPU having all 16 lanes.
But if you had one of the nicer chipsets, Wikipedia says your board could split the 16 CPU lanes into two x8 slots or one x8 and two x4 slots, which would fit. This would usually be dynamic at boot time, not at runtime; the firmware would typically check whether anything is in the x4 slots and, if so, set bifurcation; otherwise the x16 slot gets all the lanes. Some motherboards do have PCIe switches to use the bandwidth more flexibly, but those got really expensive; I think at the transition to PCIe 4.0, but maybe 3.0?
vladvasiliu
4 days ago
Indeed. I dug out the manual (MSI H170 Gaming M3), which has a block diagram showing the M.2 port behind the chipset, which is connected to the CPU via DMI 3.0. In my mind the chipset was connected via actual PCIe, but apparently it's counted separately from the "actual" PCIe lanes.
wtallis
4 days ago
Intel's DMI connection between the CPU and the chipset is little more than another PCIe x4 link. For consumer CPUs, they don't usually include it in the total lane count, but they have sometimes done so for Xeon parts based off the consumer silicon, giving the false impression that those Xeons have more PCIe lanes.
doubled112
4 days ago
The real PITA is when adding an NVMe drive disables the SATA ports you planned to use.
creatonez
4 days ago
Doesn't this usually only happen when you put an M.2 SATA drive in? I've never seen a motherboard manual have this caveat for actual NVMe M.2 drives. And encountering an M.2 SATA drive is quite rare.
ZekeSulastin
4 days ago
I have a spare-parts NAS on a Z170 (Intel 6k/7k) motherboard with 8 SATA ports and 2 NVMe slots - if I put an x2 SSD in the top slot it would disable two ports, and if it was an x4 it would disable four! Luckily the bottom M.2 slot doesn't conflict with any SATA ports, just an expansion card slot. (The board even supports SATA Express - did anything actually use that?)
SATA ports are far scarcer these days, though, and there's more PCIe bandwidth available anyway, so it's not surprising that there aren't conflicts as often anymore.
toast0
4 days ago
Nope. For AM5, both of the available chipsets [1] have 4 serial ports that can be configured as x4 PCIe 3.0, 4x SATA, or two and two. I think Intel does similar, but I haven't really kept up.
[1] A620 is cut down, but everything else is actually the same chip (or two)
viccis
4 days ago
As some others have pointed out, there are some motherboards where, if you put M.2 drives in the wrong slots, your x16 GPU slot drops to x8.
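If you're on Linux and want to check whether that has happened to you, the negotiated link width and speed are exposed in sysfs; a minimal sketch (the device address below is just an example, find yours with lspci):

    from pathlib import Path

    def pcie_link_status(bdf: str) -> dict:
        """Read negotiated vs. maximum PCIe link parameters from sysfs."""
        dev = Path("/sys/bus/pci/devices") / bdf
        return {attr: (dev / attr).read_text().strip()
                for attr in ("current_link_speed", "max_link_speed",
                             "current_link_width", "max_link_width")}

    # Example address for a GPU in the primary slot; adjust to your system,
    # e.g. find it with `lspci | grep -i vga`.
    print(pcie_link_status("0000:01:00.0"))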
carlhjerpe
4 days ago
Luckily, unless you're doing something odd with your GPU, it isn't using that bandwidth and won't lose significant performance either way.
Steve from Gamers Nexus tests new GPUs on older PCIe gens and the difference is negligible. And since PCIe has always doubled bandwidth with each generation, running on an older gen is effectively the same as running the bus at half width.
I run an Intel A380 for Linux and an NVIDIA 3060 for a Windows VM (I'm a bit cheap). I opted to use some Intel SATA (6 Gb/s) datacenter drives we decommissioned at work instead of using more PCIe lanes for storage, but the performance is outstanding.
Modern game engines don't need all those gigabytes per second. If you're doing AI maybe it matters, but then you probably hopefully maybe won't cheap out on consumer CPUs with 20 PCIe lanes either.
wtallis
4 days ago
Low-end GPUs often only have eight PCIe lanes to begin with, sometimes because they're using chips that were designed more for laptop use than for desktop cards. Intel Arc A380 and B580, AMD 7600XT, NVIDIA 4060Ti and 5060Ti are all eight-lane cards.
adgjlsfhk1
3 days ago
I really do think we're due for an expansion refactor. 75 watts is an awkward amount of power, and PCBs are worse than wires for data throughput and signal integrity. It feels like GPUs would be much happier if we just had a cable to plug into the motherboard that transferred data and no power, and which didn't force the GPU to hang off the side of the motherboard and break during shipping.
throwaway48476
4 days ago
New chipsets have become PCIe switches since Broadcom rug-pulled the PCIe switch market.
crote
4 days ago
I wish someone bothered with modern bifurcation and/or generation downgrading switches.
For homelab purposes I'd rather have two Gen3 x8 slots than one Gen5 x4 slot, as that'd allow me to use a (now ancient) 25G NIC and an HBA. Similarly, I'd rather have four Gen5 x1 slots than one Gen5 x4 slot: Gen5 NVMe SSDs are readily available, even a single Gen5 lane is enough to saturate a 25G network connection, and it'd let me attach four SSDs instead of only one.
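Rough numbers behind the 25G claim (a sketch; theoretical link rate, ignoring protocol overhead):

    # One PCIe 5.0 lane vs. a 25GbE link, one direction, theoretical rates.
    pcie5_x1_gbit = 32 * 128 / 130   # 32 GT/s with 128b/130b encoding -> ~31.5 Gbit/s
    eth_25g_gbit = 25.0

    print(f"PCIe 5.0 x1: ~{pcie5_x1_gbit:.1f} Gbit/s")
    print(f"25GbE:        {eth_25g_gbit:.1f} Gbit/s")
    print(f"headroom:    ~{(pcie5_x1_gbit / eth_25g_gbit - 1) * 100:.0f}%")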
The consumer platforms have more than enough IO bandwidth for some rather interesting home server stuff, it just isn't allocated in a useful way.
Calwestjobs
4 days ago
Gen5 does not exist in x1 form; that is an IP issue.
The other issues you mention are only "firmware disabled". The chipsets are capable of bifurcation, so maybe try visiting some Chinese / Russian firmware hacking forums... I found out that the server and consumer Ice Lake generations have been fully "cracked" open there for years. There you can find all sorts of "BIOS" generators which can generate a "BIOS" of your liking.
The cheapest way to do what you describe (after fiddling with your "BIOS" / firmware) is to buy an NVMe HBA, which are just renamed PCIe switch ICs... ;) A brand-name PCIe 4.0 switch can be bought for 1000-1200 dollars retail and will let you bifurcate down to x1, but I'm not sure about prices for PCIe 5.0.
Or, if you want a more costly but out-of-the-box working device, try looking at [ https://www.h3platform.com/product-detail/Topology/24 ]
And do not forget that newer Intel PCs have USB at 40 Gbps... so do you really need 25 Gbps Ethernet? Linux / BSD does not care what you transfer over USB (Ethernet/IPv6 encapsulated in USB)...
EDIT: you can buy an HBA with 8 x4 connectors. This HBA/switch connects to your PC over PCIe 4.0 x8, while 8 × 4 = 32 lanes hang off its downstream side, so under full load each x4 drive effectively gets x1 of upstream bandwidth (or you can connect only one lane of each x4, etc.). Out-of-the-box thinking.
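To make that oversubscription arithmetic explicit (a sketch; lane counts as described above):

    # A hypothetical NVMe HBA / PCIe switch: 8 downstream x4 connectors
    # sharing a single x8 uplink to the CPU.
    downstream_lanes = 8 * 4   # 32 lanes on the drive side
    uplink_lanes = 8           # lanes back to the CPU

    ratio = downstream_lanes / uplink_lanes
    print(f"oversubscription: {ratio:.0f}:1")                                 # 4:1
    print(f"effective lanes per x4 drive under full load: x{4 / ratio:.0f}")  # ~x1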
antonkochubey
4 days ago
Installing a BIOS from Chinese / Russian forums might be the last thing one would ever want to do, though.
Calwestjobs
3 days ago
Of course; it was more like "go ask the countries that don't care about IP/NDAs" - they can post it freely. You can always copy what they do, even without using the whole code/image/bin.
FirmwareBurner
4 days ago
I'll host them on a US/EU forum. Better now?
gruez
4 days ago
>broadcom rug pulled the PCIe switch market.
What does this mean? Did they jack up prices?
nyrikki
4 days ago
Avago wanted PLX switches for enterprise storage, not low-margin PC/server sales.
Same thing that Avago did with Broadcom, LSI, Brocade, etc. during the 2010s: buy a market leader, dump the parts that they didn't want, and leave a huge hole in the market.
When you realize that Avago was the brand produced when KKR and Silver Lake bought the chip business from Agilent, it is just the typical private equity play: buy your market position and sell off or shut down the parts you don't care about.
bbarnett
4 days ago
When the LSI buy happened, they dumped massive amounts of new but prior-gen stock into the distributor channel, then immediately declared it EOL.
Scumbags.
Calwestjobs
4 days ago
You can run your GPU at x4 and your eyes won't see the difference.
EDIT: yes.