Neil44
10 hours ago
Samsung makes fast, expensive storage, but even cheap storage can max out SATA, so there's no point in Samsung trying to compete in the dwindling SATA space.
mwambua
10 hours ago
Does this mean that we'll start to see SATA replaced with faster interfaces in the future? Something like the U.2/U.3 that's currently available to the enterprise?
Aurornis
8 hours ago
The first NVMe over PCIe consumer drive was launched a decade ago.
It's hard to even find new PC builds using SATA drives.
SATA was phased out many years ago. The primary market for SATA SSDs is upgrading old systems or maybe the absolute lowest cost system integrators at this point, but it's a dwindling market.
Fire-Dragon-DoL
4 hours ago
It's for HDDs. We still use those for massive storage.
zamadatix
9 hours ago
NVMe via m.2 remains more than fine for covering the consumer SSD use cases.
zokier
9 hours ago
The problem is that you only get a pitiful number of M.2 slots on mainstream motherboards.
Night_Thastus
9 hours ago
A lot of modern boards come with 3 or more - that's what mine has. And with modern density, that's a LOT of storage. I have two 4TB drives!
You could even get more using a PCIe NVMe expansion card, since it's all over PCIe anyway.
tracker1
3 hours ago
My desktop motherboard has 4... not sure how many you need, even if 8TB drives are pretty pricey. Actual PCIe lanes in consumer CPUs are limited, though. If you bump up to Threadripper, you can use PCIe-to-M.2 adapters to add lots of drives.
zamadatix
6 hours ago
On top of what the others have said, any faster interface you replace SATA with will have the same problem set, because the limit is rooted in the total bandwidth to the CPU, not the form factor of the slot.
E.g. going to the suggested U.2 still leaves you looking for free PCIe lanes to feed it.
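A rough back-of-the-envelope in Python (approximate per-lane figures I'm assuming here; a sketch, not vendor specs):

    # Interface bandwidth is set by the PCIe lanes behind it, not the connector.
    # Approximate usable GB/s per PCIe lane after encoding overhead (assumed).
    PCIE_GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}
    SATA3_GBPS = 0.6  # 6 Gb/s link with 8b/10b encoding -> ~600 MB/s usable

    def link_bandwidth(gen: int, lanes: int) -> float:
        """Usable bandwidth of a PCIe link in GB/s."""
        return PCIE_GBPS_PER_LANE[gen] * lanes

    # M.2 and U.2/U.3 both carry the same x4 PCIe link, so swapping the
    # connector changes nothing about the lane budget the CPU must supply.
    for gen in (3, 4, 5):
        bw = link_bandwidth(gen, 4)
        print(f"PCIe {gen}.0 x4: ~{bw:.1f} GB/s ({bw / SATA3_GBPS:.0f}x SATA III)")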
wtallis
9 hours ago
Three is not pitiful. Three is plenty for mainstream use cases, which is what mainstream motherboards are designed for.
ComputerGuru
8 hours ago
We used to have motherboards with six or twelve SATA ports. And SATA HDDs have way more capacity than the paltry (yet insanely expensive) options available with NVMe.
wtallis
8 hours ago
We used to want to connect SSDs, hard drives and optical drives, all to SATA ports. Now, mainstream PCs only need one type of internal drive. Hard drives and optical drives are solidly out of the mainstream and have been for quite a while, so it's natural that motherboards don't need as many ports.
justsomehnguy
7 hours ago
> Now, mainstream PCs only need one type of internal drive
More to the point, it only needs one drive. The ODD has been dead for at least 10 years, and most people never need another internal drive at all.
tracker1
3 hours ago
Still use an ODD for ripping... that said, I'm using a USB3 Blu-ray writer and it's been fine for what I need.
mgerdts
8 hours ago
This article is talking about SATA SSDs, not HDDs. While the NVMe spec does allow for NVMe HDDs, it seems silly to waste even one PCIe lane on an HDD. SATA HDDs continue to make sense.
ComputerGuru
6 hours ago
And I'm saying that assuming M.2 slots are sufficient to replace SATA is folly, because that only covers SSDs.
And SATA SSDs do make sense: they are significantly more cost-effective than NVMe and trivial to expand. Compare the simplicity, ease, and cost of building an array/pool of many disks made of either 2.5" SATA SSDs or M.2 NVMe, and get back to me when you have a solution that can scale to 8, 14, or 60 disks as easily and cheaply as the SATA option can (a rough sketch of the lane math follows below). There are many cases where the performance of SSDs going over AHCI (or SAS) is plenty, and you don't need to pay the cost of dedicating full PCIe lanes per disk.
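To make that lane math concrete, a minimal Python sketch (simplified assumptions: ignores tri-mode HBAs, PCIe switches, and real-world topology limits):

    # Simplified lane cost of scaling drive counts (illustrative assumptions).
    PCIE_LANES_PER_NVME = 4  # a typical direct-attach NVMe drive wants x4
    HBA_UPLINK_LANES = 8     # a common SAS HBA sits on a x8 PCIe link

    def lanes_nvme_direct(drives: int) -> int:
        # Direct-attach NVMe: every drive needs its own lanes.
        return drives * PCIE_LANES_PER_NVME

    def lanes_sata_sas_hba(drives: int) -> int:
        # SATA/SAS behind an HBA plus expanders: the lane cost stays flat
        # because drives share the uplink (fine when per-drive needs are modest).
        return HBA_UPLINK_LANES

    for n in (8, 14, 60):
        print(f"{n:>2} drives: NVMe direct ~{lanes_nvme_direct(n)} lanes, "
              f"SATA/SAS via HBA ~{lanes_sata_sas_hba(n)} lanes")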
wtallis
6 hours ago
> And SATA SSDs do make sense, they are significantly more cost effective than NVMe
That doesn't seem to be what the vendors think, and they're probably in a better position to know what's selling well and how much it costs to build.
We're probably reaching the point where the up-front costs of qualifying new NAND with old SATA SSD controllers and updating the firmware to properly manage the new NAND is a cost that cannot be recouped by a year or two of sales of an updated SATA SSD.
SATA SSDs are a technological dead end that's no longer economically important for consumer storage or large scale datacenter deployments. The one remaining niche you've pointed to (low-performance storage servers) is not a large enough market to sustain anything like the product ecosystem that existed a decade ago for SATA SSDs.
dana321
7 hours ago
It's not enough if you have four SSDs with 4TB each, for instance.
zamadatix
6 hours ago
Is it not fair to say 4x4 TB SSD is an example of at least a prosumer use case? (The barrier is more like ~10 drives before needing workstation/server gear.) Joe Schmoe is doing better than half of Steam gamers if he's rocking a single 2 TB SSD as his primary drive.
razster
8 hours ago
The MSI motherboard I use has 3, and with a PCIe expansion card installed, I have 7 M.2 drives. There are some expansion cards with 8 M.2 slots. You can also get SATA-to-M.2 devices, or my favorite: USB-C enclosures that hold 2 M.2 drives. I'm getting great speeds from that little device.
Aurornis
8 hours ago
Most consumer motherboards have 2-3 M.2 slots.
You can buy cheap add-in cards to use PCIe slots as M.2 slots, too.
If you need even more slots, there are add-in cards with PCIe switches which allow you to install 10+ M.2 drives into a single PCIe slot.
barrkel
9 hours ago
It's more likely that third-party integrators will look after the demand for SSD SAS/SATA devices, and the demand won't go away, because SAS multiplexers are cheap while NVMe/PCIe is point-to-point and expensive to build switching hardware for.
We'd likely need a different protocol to make scaling up the number of high-speed SSDs in a single box work well.
0manrho
7 hours ago
SATA just needs to be retired. It's already been replaced; we don't need Yet Another Storage Interface. Consumer I/O chipsets are already implemented such that they take 4 (or generally, a few) upstream lanes of current-gen PCIe to the CPU and bifurcate/multiplex them out (providing USB, SATA, NVMe, etc.), so we should just remove the SATA cost/manufacturing overhead entirely and focus on keeping the cost of that PCIe switching/chipset down for consumers (and stop double-stacking chipsets, AMD; motherboards are pricey enough). Or even just integrate better bifurcation support on the CPUs themselves, as some already do (typically by splitting the x16 on the "top"/"first" PCIe slot into x4/x4/x4/x4) - a rough lane tally follows after the list below.
Going forward, SAS should just replace SATA where NVMe PCIe is for some reason a problem (e.g. price), even on the consumer side, as it would still support existing legacy SATA devices.
Storage related interfaces (I'm aware there's some overlap here, but point is, there's already plenty of options, and lots of nuances to deal with already, let's not add to it without good reason):
- NVMe PCIe
- M.2 and all of its keys/lengths/clearances
- U.2 (SFF-8639) and U.3 (SFF-TA-1001)
- EDSFF (which is a very large family of things)
- FibreChannel
- SAS and all of its permutations
- Oculink
- MCIO
- Let's not forget USB4/Thunderbolt supporting Tunnelling of PCIe
Obligatory: https://imgs.xkcd.com/comics/standards_2x.png
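The lane tally mentioned above, as a rough Python sketch (assumed figures: a consumer CPU with 24 usable general-purpose lanes plus a x4 chipset uplink, roughly AM5-like; not exact for any specific part):

    CPU_LANES = 24  # assumed usable general-purpose lanes on a consumer CPU

    allocations = {
        "x16 slot bifurcated to x4/x4/x4/x4 (4x NVMe)": 16,
        "first M.2 slot": 4,
        "second M.2 slot": 4,
    }

    for name, lanes in allocations.items():
        print(f"{lanes:>2} lanes -> {name}")
    print(f"{sum(allocations.values())}/{CPU_LANES} CPU lanes used; USB, SATA, "
          "and any extra M.2 all hang off the x4 chipset uplink")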
saltcured
6 hours ago
I think it's becoming reasonable to expect consumer storage to be a limited number of soldered NVMe and NVMe-over-M.2 slots, complemented by contemporary USB for further expansion. That USB expansion might be some kind of JBOD chassis, whether that's a pile of SATA or additional M.2 drives.
The main problem is having proper translation of device management features, e.g. SMART diagnostics or similar getting back to the host. But from a performance perspective, it seems reasonable to switch to USB once you are multiplexing drives over the same, limited IO channels from the CPU to expand capacity rather than bandwidth.
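For instance, whether SMART makes it across a USB bridge usually comes down to SCSI/ATA Translation (SAT) support; a minimal probe with smartmontools (assuming smartctl is installed; the device path is a placeholder):

    # Probe SMART health through a USB-SATA bridge via SCSI/ATA Translation.
    # Bridges without SAT passthrough will simply fail here, which is
    # exactly the device-management translation gap described above.
    import subprocess

    DEVICE = "/dev/sdX"  # placeholder: your USB-attached drive

    result = subprocess.run(
        ["smartctl", "-H", "-d", "sat", DEVICE],
        capture_output=True, text=True,
    )
    print(result.stdout or result.stderr)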
Once you get out of this smallest consumer expansion scenario, I think NAS takes over as the most sensible architecture for small office/home office settings.
Other SAN variants really only make sense in datacenter architectures where you are trying to optimize for very well-defined server/storage traffic patterns.
Is there any drawback to going towards USB for multiplexed storage inside a desktop PC or NAS chassis too? It feels like the days of RAID cards are over, given the desire for host-managed, software-defined storage abstractions.
Does SAS still have some benefit here?
wtallis
5 hours ago
I wouldn't trust any USB-attached storage to be reliable enough for anything more than periodic incremental backups and verification scrubs. USB devices disappear from the bus too often for me to want to rely on them for online storage.
saltcured
4 hours ago
OK, I see that is a potential downside. I can actually remember way back when we used to see sporadic disconnects and bus resets for IDE drives in Linux and it would recover and keep going.
I wonder what it would take to get the same behavior out of USB as for other "internal" interconnects, i.e. say this is attached storage and do retry/reconnect instead of deciding any ephemeral disconnect is a "removal event"...?
FWIW, I've actually got a 1 TB Samsung "pro" NVMe/M.2 drive in an external case, currently attached to a spare Ryzen-based Thinkpad via USB-C. I'm using it as an alternate boot drive to store and play Linux Steam games. It performs quite well. I'd say it's qualitatively like the OEM internal NVMe drive when doing disk-intensive things, but maybe that is bottlenecked by the Linux LUKS full-disk encryption?
Also, this is essentially a docked desktop setup. There's nothing jostling the USB cable to the SSD.
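One way to sanity-check the LUKS theory (assumes cryptsetup is installed; this measures in-memory cipher throughput only, no disk I/O):

    # 'cryptsetup benchmark' reports raw cipher throughput with no disk involved.
    import subprocess

    out = subprocess.run(["cryptsetup", "benchmark"],
                         capture_output=True, text=True)
    print(out.stdout or out.stderr)
    # If the aes-xts line shows multiple GB/s, the cipher isn't the limit
    # and the USB link or bridge is the more likely bottleneck.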