Self-hosting is being enshittified

98 points, posted 19 hours ago
by StrLght

104 Comments

apexalpha

13 hours ago

This blog seems to be from "that side" of the self-hosting world: the homelabbers.

If you ask these people, you need to buy expensive hardware and build your own datacenter at home.

I have been hosting all my services on a single Intel NUC from 10 years ago, with an RPi 5 as backup for critical services like DNS.

That's it.

You'll truly be amazed at how much stuff you can actually run on very little hardware if you only have between 2 and 5 users, like in a family.

Also, MinIO was always an enterprise option. It was never meant for home use. Just use SeaweedFS, Garage, or similar if you really want S3.

Sidenote: You do not need S3 in your house. Just use the filesystem.

KolenCh

9 hours ago

I think there’s a spectrum, and you said it as if there are only two sides.

For me personally, I built my “data centre” as cheaply as possible, but there are a few requirements where the computers you’re using wouldn't cut it: the storage server must use ZFS with ECC. I started this around a decade ago and only spent ~$300 at the time (reusing an old PSU and case, I think).

There are many requirements of a data centre that can be relaxed in a home lab setting (uptime, performance, etc.), but I would never trade data integrity for a tiny bit of savings. Sadly this is a criterion that many, including some of those building very sophisticated home clusters, don't set as a priority.

winstonwinston

2 hours ago

Press X to doubt. I've also been using a $150 NUC at home for over a decade. At first it was ext4, then XFS, without ECC memory.

It looks to me, and I could be wrong, like many “homelabbers” upgraded from hoarding DVDs to hoarding Docker containers or whatever.

crapple8430

12 hours ago

A powerful enough machine (usually limited by RAM, not CPU) will let you run a hypervisor OS like Proxmox, which helps a lot with making things secure and flexible. You might also want RAID and ECC memory. It quickly starts to make sense to build a proper home server rather than cobbling together a bunch of low-end hardware. The tipping point is probably when you want more than 1-2 hard drives' worth of storage.

apexalpha

12 hours ago

If you run everything on Linux you don't need VMs.

What are you putting in the VM, another Linux kernel? Why? Yeah, then you need to account for between 4GB and ~8GB of extra RAM per VM.

I don't have RAID, though I do back up to my NAS at my parents'.

But honestly an NVMe drive is basically like a CPU: it's either dead on arrival or it will just run forever.

Saris

9 hours ago

The average Linux VM I run is around 50-100MB of RAM usage. Not actually that much more than an LXC container.

There are some use cases for a VM over a container: sometimes you want better isolation (my public-facing webserver runs in one), or a different OS for some reason (I run an OSX VM because it's the only way to test a site in Safari).

apexalpha

7 hours ago

Ok that is a very low usage. Alpine or so?

But yeah I just restrict my webserver in an unprivileged container. Though my site is static and accepts no input whatsoever.

Saris

7 hours ago

Just a basic Debian install.

Containers also have some advantages for device passthrough: I have my Intel iGPU added into one for Immich and Frigate. You can't do that with a VM unless you detach the whole GPU from the system.
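For reference, the iGPU passthrough boils down to a couple of lines in the container's config. This is only a sketch from memory; the container ID is an example and you should check your own /dev/dri:

    # /etc/pve/lxc/101.conf  -- 101 is just an example container ID
    # let the container open DRI character devices (major number 226 = /dev/dri)
    lxc.cgroup2.devices.allow: c 226:* rwm
    # bind-mount the host's /dev/dri into the container for VAAPI/Quick Sync
    lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir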

notarget137

10 hours ago

Backing up entire VMs with all their configuration, in case an update breaks something or just bricks your server, is a smart idea, as well as running stuff in containers. Also, 4GB per VM? Besides, sometimes you need to run software that is not available on Linux.

apexalpha

10 hours ago

If you back up the entire VM, you are just backing up the Linux kernel itself and all the (GNU) tools with it.

Seems like a waste to me.

Back up your Docker config and your data; that's what you actually need. The rest is just available online if you ever need it.
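Concretely, something like this is all it takes (the paths are just an example layout, and restic is only one option among many):

    # compose files in /opt/stacks, bind-mounted app data in /srv/appdata (example layout)
    # back up only config + data; images and the OS can always be re-pulled
    restic -r /mnt/nas/backups backup /opt/stacks /srv/appdata

    # or, without any backup tool, a plain dated tarball to the NAS:
    tar czf /mnt/nas/backups/selfhost-$(date +%F).tar.gz /opt/stacks /srv/appdata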

> Besides, sometimes you need to run software that is not available on Linux.

Really, like what?

Saris

9 hours ago

Good backup software deduplicates on storage; Proxmox Backup Server, for example.

63stack

10 hours ago

I'm also going to leave my personal opinion:

You don't need ECC

You absolutely don't need Proxmox; containers are good enough

It does not quickly make sense to build a proper home server

RAID 1 or RAID 6 makes sense, but it's absolutely not a tipping point.

ivanjermakov

12 hours ago

Man, "homelab" is such a wide term. For some it's an old Android, for some it's a literal datacenter in a basement. And everything in between.

Goals are vastly different too. For some it's about hosting a few services to be free from company slop, for others it's a way to practice devops: clustering, containers, complex networking.

Seeing someone recommend Proxmox or FreeNAS to a beginner who just wants to share family photos from an old laptop is wrong in so many ways...

zzyzxd

7 hours ago

I used to be on the single-NUC side, but when my self-hosted services became important enough, I realized I needed to take security and reliability seriously (you know, all the SysAdmin/SRE stuff), and that's when I started moving to "that side".

ku1ik

12 hours ago

There’s always a risk of a rug pull or of going the wrong direction with “open-source” software developed by a for-profit company (Plex, MinIO, Mattermost in this example).

When choosing software to run in my “homelab” I lean towards community-developed projects first. They may not always be as high quality as the ones offered by commercial entities, but they're safer for the long term and have no artificial limits (Plex). I used to be a happy Plex customer (I have Plex Pass), but several years ago I had enough of their bullshit, switched to Jellyfin, and couldn't be happier!

apexalpha

12 hours ago

Another Plex (Pass) user here. What exactly is the bullshit you deal with?

Over the years I have been streaming all my movies and shows to as many people as I want.

Plex added HDR support for transcoding, live subtitle syncing, and more.

The subtitle syncing especially is fantastic. It completely solved the problem.

ajdude

2 hours ago

Also a Plex Pass holder. I'd been using it for years, and finally pulled the trigger on buying a lifetime Plex Pass because I wanted to move away from Flickr and loved that I could automatically upload photos taken on my phone to Plex.

... not a week later they announced that they were getting rid of that feature.

Then they forced everybody into having accounts, and changed things so I couldn't just visit my media server's address directly.

I also leverage Plex for live TV but they still don't support most OTA HD channels for licensing reasons.

Then they got rid of "watch together", which family and friends have heavily used over the years (re-implementing it is currently the second most requested feature in their suggestions).

Now they have the new pricing model where you must have Plex Pass or some other subscription if you want to watch stuff stored on your own media server from outside your local network.

It's getting frustrating and despite people begging for certain things (e.g. Watch together) they seem to just be ignoring what people are asking for and focusing on weird stuff like sharing your watch history with random people or trying to turn Plex into a social media platform.

nineteen999

9 hours ago

Not OP, but: ugh, having to update my Plex server/apps every few months after not having used them for a while, and now having to have a subscription in the mobile app to stream from your own server using your own phone. Don't get me wrong, it's great when it works, but I'd prefer something more... open.

the_snooze

18 hours ago

Maybe I'm missing something here: the great thing about self-hosting is that you choose if and when you update your back-end software. What's stopping self-hosting admins from simply staying on a known good version and forking that if they so desire?

figmert

18 hours ago

Security updates are often what's stopping them.

You also realistically can't fork things unless multiple people do, and they all stay interested in the fork.

yegle

16 hours ago

You don't have to expose your self-hosted services on the Internet to begin with. 0day bugs do exist even if you diligently apply all security updates.

em-bee

11 hours ago

Making sure that your system is not exposed to the internet takes effort too. And then you realize you want to share something with friends or family, or access your home server remotely. You also eventually want updates for new features.

the_snooze

4 hours ago

There are different degrees of "exposed to the internet." You don't need to make your self-hosted services fully accessible by anyone from everywhere. VPN, IP whitelists, mTLS, HTTP basic auth, etc. change the calculus of security and feature updates. You can afford to lag a bit behind on updates because you're not running critical enterprise infrastructure at scale.
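To give a rough idea of what that looks like in practice, here's an nginx sketch; the subnets, paths and upstream port are made up for illustration:

    location / {
        allow 192.168.1.0/24;    # home LAN
        allow 10.8.0.0/24;       # VPN subnet, e.g. WireGuard
        deny  all;

        auth_basic           "private";
        auth_basic_user_file /etc/nginx/htpasswd;   # generated with htpasswd

        proxy_pass http://127.0.0.1:8080;           # the self-hosted app
    }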

orev

4 hours ago

Pretty much every home router, network firewall, and host-based firewall is set to deny all by default, so the effort is mostly needed to allow exposure to the Internet.

fortyseven

11 hours ago

The advantage of hosting content on Plex and other media servers is that you can play it remotely. I can be on the other side of the Earth and still access my media. This is an extremely common use case.

63stack

10 hours ago

You can put them behind WireGuard and still have all this without exposing them to the internet.
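Roughly, the client side is just a small wg-quick config; the keys, addresses and endpoint below are placeholders:

    [Interface]
    PrivateKey = <client-private-key>
    Address = 10.8.0.2/24
    DNS = 10.8.0.1                 # e.g. point DNS at the home server / Pi-hole

    [Peer]
    PublicKey = <server-public-key>
    Endpoint = home.example.org:51820
    AllowedIPs = 10.8.0.0/24, 192.168.1.0/24   # only route the homelab subnets through the tunnel
    PersistentKeepalive = 25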

wmf

18 hours ago

> you choose if and when you update your back-end software

That's what we say it's about. But it's really about open source devs being our slaves forever. Get to work, Mattermost! (whip crack)

underdeserver

18 hours ago

Did you read the Github issue? These guys are paying customers.

akerl_

18 hours ago

Where are you seeing that? From what I can tell, the 10k message limit applies to "Mattermost Entry":

> Mattermost Entry gives small, forward-leaning teams a free self-hosted Intelligent Mission Environment to get started on improving their mission-critical secure collaborative workflows. Entry has all features of Enterprise Advanced with the following server-wide limitations and omissions:

https://docs.mattermost.com/product-overview/editions-and-of...

renewiltord

16 hours ago

What the fuck is this lmao? "a free self-hosted Intelligent Mission Environment to get started on improving their mission-critical secure collaborative workflows".

Sounds like some kind of parody of enterprise software.

akerl_

9 hours ago

I don’t know what you’re talking about. It’s a free tier.

underdeserver

14 hours ago

Can't edit anymore - I was wrong here.

I saw "we're happy to pay for it" and thought they were paying for it. They're not, yet.

akerl_

9 hours ago

Yea, here “we’re happy to pay for it” really means “we’re not happy to pay the price you’re charging, but maybe we’d pay if you fundamentally changed your prices or pricing model.”

wmf

18 hours ago

If so that is indeed shitty. I thought they were crippling the free tier.

underdeserver

14 hours ago

I was wrong. They were only nerfing the free self-hosted tier. I misread a few comments there.

wrxd

13 hours ago

It’s even better than that. You no longer like Plex? Alternatives like Jellyfin are there, and you can use them on the same media library.

sshine

12 hours ago

The Plex rug-pull from excellent software to commercial gimmick happened years ago when they removed your ability to search your personal media library.

I assumed that they were being forced by the copyright mafia, but they’re perfectly capable of making these decisions on their own.

goku12

12 hours ago

Self-hosted FOSS apps are probably the best push towards computing freedom and privacy today. But I wish that the self-hosting community moved towards a truly distributed architecture, instead of trying to mimic the paradigms of corporate centralized software. This is not meant as criticism of the current self-hosted architecture or the apps. But I wish the community focused on a different set of features that suit home computing conditions more closely:

1. Peer-to-peer model of decentralization like bittorrent, instead of the client-server model. Local web UIs (like Transmission's web UI) may be served locally (either host-only or LAN-only) as frontend for these apps. Consider this as the 'last-mile connectivity' if you will.

2. Applications are resilient to outages. Obviously, home servers can't be expected to be always online. They may even be running on your regular desktop. But you shouldn't lose the utility of a service just because it goes offline. A great example of this is email: servers can wait for up to 2 days for the destination server to show up before declaring a delivery failure. Even rejections are handled with retries minutes later.

3. The applications should be able to deal with dynamic IPs and NATs. We will probably need a cryptographic identity mechanism and a way to translate that into a connection to the correct end node. But most of these technologies exist today.

4. E2E encrypted and redundant storage and distribution servers for data that must absolutely be online all the time. Nostr relays seem like a good example.

The Solid and Nostr projects embody many of these ideas already. It just needs a bit more polish to feel natural and intuitive. One way to do it is to have a local daemon that acts as a gateway, cache and web-ui to external data.

moritzruth

10 hours ago

You might be interested in https://www.iroh.computer/

goku12

10 hours ago

Yeah, I have been planning to try out Iroh sometime soon. However, what I explained will take a whole lot of planning on top of Iroh. I also don't want to replicate what others have already achieved. It would be best if something could be built on top of those. Let's see how it goes.

apexalpha

12 hours ago

Sounds like you want a k3s based homelab and then connect it all with Tailscale or Netbird.

FWIW: my Intel NUC on Ubuntu LTS with a cron apt update/upgrade job has worked for years without a hiccup.
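The cron job itself is nothing fancy; roughly something like this (schedule and flags are just an example, adjust to taste):

    # /etc/cron.d/apt-auto -- nightly update/upgrade at 04:00
    0 4 * * * root apt-get update -qq && apt-get -y dist-upgrade -qq

    # or let Ubuntu's built-in mechanism handle security updates instead:
    #   apt install unattended-upgrades
    #   dpkg-reconfigure -plow unattended-upgrades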

I have reliable electricity and internet at home, though.

goku12

12 hours ago

> Sounds like you want a k3s based homelab and then connect it all with Tailscale or Netbird.

I apologize if it was confusing. I was suggesting the exact opposite. It's not about how to build a mini enterprise cluster. It's about how to change the service infrastructure to suit the small computers we usually find at homes, without any modifications. I'm suggesting a more fundamental change.

> I have reliable electricity and internet at home, though.

It isn't too bad where I'm at, either. But sadly, that isn't the practical situation elsewhere. We need to treat power and connectivity as random and intermittent.

retrodaredevil

17 hours ago

I don't really like that "enshittified" is being used here. You could argue that Plex, MinIO or Mattermost is being enshittified, but definitely not self hosting as a whole.

Enshittification also usually implies that switching to an alternative is difficult (usually because creating a competing service is near impossible because you'd have to get users on it). That flaw doesn't really apply to self hosting like it does with centralized social media. You can just switch to Jellyfin or Garage or Zulip. Migration might be a pain, but it's doable.

You can't as easily stop using LinkedIn or GitHub or Facebook, etc.

overtone1000

15 hours ago

Ctrl+F "jellyfin" to find this excellent comment.

jghn

16 hours ago

Same. I have been using Plex for 15 years. For my personal use case, it has not changed, ever. I don't encounter any "enshittification". For my purposes it continues to be exactly what I want, just as it always was.

goku12

12 hours ago

> You could argue that Plex, MinIO or Mattermost is being enshittified, but definitely not self hosting as a whole.

That's probably not how you should interpret it. Self hosting as a whole is still a vastly better option. But if there is a significant enough public movement towards it, you can expect it to be targeted for enshittification too. The incidents related to Plex, MinIO and Mattermost should be taken as warning signals about what this may escalate into in the future. Here are the possible problems I foresee.

1. The situation with Plex, MinIO and Mattermost can be expected to happen more frequently. After a point, the pain of frequent migration becomes untenable. MinIO is a great example: even the crowd on HN hadn't considered an alternative until then. Some of us learned about Garage, RustFS and Ceph S3 for the first time and were debating each of their pros and cons. It's very telling that that discussion was so lengthy.

2. There is a gradual nudge to move everything to the cloud and then monetize it. Mandatory online account for Win11, monetization of GH self-hosted runner (now suspended after backlash, I think) and cloudification of MS Office are good examples. You can expect a similar attempt on self hosted applications. Of course, most of our self-hosted software is currently open source. But if these big companies decide to embrace, extend and extinguish it, I'm not sure that the market will be prudent enough to stick with the FOSS options. Half of HN was fighting me a few days back when I suggested that we should strive to push the market towards serviceable modular hardware.

3. FOSS projects developed under companies are always at a higher risk of being hijacked or going rogue. To be clear, I'm not against that model. For example, I'm happy with Zulip's development and monetization model: ethical, generous and not too pushy. But Mattermost shows where that can go wrong. Sure, they are open source. But there are practical difficulties in easily overriding such issues.

4. At one time, we were expecting small form-factor headless computers (plug computers [1]) like the SheevaPlug and FreedomBox to become ubiquitous. That should still be an option, though I'm not sure where it's headed, given the current RAM situation. But even if they make a comeback, it's very likely that OEMs will lock them down like smartphones today and make it difficult for you to exercise your choice of servers, if not outright restrict it. (If anybody wants to argue that normal people will never consider it, remember how smartphones were before the iPhone. We had BlackBerry, which was used only by a niche crowd.)

[1] https://en.wikipedia.org/wiki/Plug_computer

reactordev

18 hours ago

If you’re self-hosting, do you need 128GB of ram?

I suspect you don’t. I suspect a couple of beelinks could run your whole business (minus the GPU needs).

goku12

14 hours ago

What I understood is that the author is hoarding RAM for the future, not because there is any need for it right now. You could argue that it's too much RAM even at the end of the server's useful lifetime. But who knows? What if he ends up running a few dozen services on it by then?

Honestly, the problem that they're preparing for isn't any of our fault. This is inflicted upon the world by some very twisted business models, incentives and priorities. It's hard to predict how it will all end up. Perhaps the market will be flooded with tons of RAM that will have to be transplanted onto proper DIMM modules. Or perhaps we might be scavenging the e-waste junkyard for every last RAM IC we can find, in which case his choice would be correct.

reactordev

8 hours ago

When we were space constrained, we built smaller. When we were block constrained, we built SSDs. When we were graphics constrained, we built GPUs. Now that we’re memory constrained, we’ll see some advancements in this area as well. 1TB RAM chips are right around the corner.

goku12

7 hours ago

The problem is the economic incentive. In all the prior cases you mentioned, their commercial interests aligned with our own, at least in part. This time, however, I'm worried that they aren't concerned about burning down the world economy, at least in part, since their bottom line won't suffer for it.

For example, Micron didn't think about any alternatives for the consumer retail market. They just dumped it entirely.

reactordev

7 hours ago

If Ford stopped making cars…

You fail to grasp that if Micron decides the consumer market isn’t for them, others will gladly fill that void.

goku12

6 hours ago

The economics behind this isn't rocket science. Micron left the retail market because it is more profitable for them to supply exclusively to the hyperscalers. Not because they can't supply the retail market or because it wasn't profitable. What makes you think that any other manufacturer is going to take a different decision? Why would they choose a market that offers less than the biggest bidder?

reactordev

5 hours ago

Someone will fill the void. Another company will sell RAM to the retail market if the big players won't. More manufacturing capacity will open up. It's not like RAM is becoming extinct. Someone will tool up and produce it for the retail market, just as other brands have done in other sectors when the market shows a void.

Last I checked I could still buy Crucial RAM chips. In time, maybe it's Kingston. Or maybe Gigastone.

ryukoposting

15 hours ago

I've been running Syncthing, HA, Samba, Jellyfin, and some homemade stuff on a Celeron NUC with 8GB. Works fine.

reactordev

8 hours ago

With probably 7GB of headroom

some-guy

17 hours ago

I run quite a few services with a used Dell Wyse 5070 thin client PC from 2018 with 4GB of ram.

trollbridge

16 hours ago

I self-host and generally put 64GB of RAM in servers (DDR3, thankfully). Certain arrangements of Docker-based services simply chew up a lot of RAM.

boltzmann-brain

17 hours ago

> I suspect you don’t

...today.

If you're self-hosting, do you need 640K of ram?

fernie

17 hours ago

And you can upgrade in the future to match your actual needs, instead of wasting money trying to front-load costs for no benefit.

You can buy a “lightly used” Dell Optiplex with 8GB of RAM for like $40, which will cover all your self-hosting needs today.

kombine

15 hours ago

What's its power consumption?

moepstar

15 hours ago

Mini PCs/uSFF with an Intel i5 6500 generally use about 8-10W with low to moderate load, WiFi disabled.

lazylizard

18 hours ago

In the last 5-10 years... Let's Encrypt made SSL much easier, and it's possible to host on small, cheap ARM devices...

Yes, no more free DynDNS accounts... but you can still use afraid.org or Cloudflare tunnels, maybe?

And in some cases nowadays you can get away with

docker-compose up
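For example, a whole static site with automatic HTTPS can be a single small compose file. This is just a sketch; the domain and paths are placeholders:

    services:
      web:
        image: caddy:2                            # Caddy handles Let's Encrypt certificates itself
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - ./Caddyfile:/etc/caddy/Caddyfile:ro   # e.g. a "blog.example.org" site block with root * /srv and file_server
          - ./site:/srv:ro
          - caddy_data:/data                      # persists certificates across restarts
    volumes:
      caddy_data: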

And some of those things, like MinIO and Mattermost: are they complaints about the free tier, or complaints about self-hosting? I can't tell.

Indeed, the easiest "self hosting" ever was when ngrok happened... you could get your port listening on the internet without a signup... by just running a single binary without a flag...

CuriouslyC

18 hours ago

Mattermost is infamous crippleware, and they charge more than Slack for a worse product if you pay. Use Zulip.

ls612

18 hours ago

Nowadays, for self-hosted DNS, the solution I use is Pi-hole + Tailscale (for Pi-hole DNS anywhere). If I could figure it out in one afternoon, it is pretty idiot-proof.
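From memory, the whole setup is roughly the following (the install one-liner is Tailscale's official script; the DNS step happens in their admin console):

    curl -fsSL https://tailscale.com/install.sh | sh   # on the Pi-hole box
    sudo tailscale up                                   # join the tailnet
    tailscale ip -4                                     # note the Pi-hole's 100.x.y.z address
    # then, in the Tailscale admin console, set that 100.x address as the tailnet's
    # global nameserver ("Override local DNS") so every device uses Pi-hole from anywhere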

BrandoElFollito

12 hours ago

> Self-hosting has always been hard, and it's not getting easier.

Oh yes it is. I already self hosted stuff back in 2000 and it was very hard. Then came docker and it is very simple now.

Sure "very simple" mean different things to different people, but if you self host you need to know a lot already.

This is somehow similar to amateur electronics. You used to do 100% yourself from scratch. Now you have boards and you can start in a much simpler way.

renewiltord

16 hours ago

If you want someone else's software you have to play their game. If you want software to be perfectly aligned to you, write it yourself. It turns out lots of so-called engineers are just script kiddies. They think they're doing engineering when they run someone else's `vagrant up` then get upset when the someone else upgraded from 4.1.0 to 4.1.1. Write your code. Become ungovernable.

PessimalDecimal

18 hours ago

2/3 of this article is about DRAM prices. How is that "enshittification" of self-hosting?

weikju

18 hours ago

Maybe the remaining 1/3 answers the question.

PessimalDecimal

18 hours ago

It doesn't. There's seemingly no connection between the handful of examples of self-hosting software actually getting worse, and the earlier point about hardware costs.

adastra22

17 hours ago

This is a year-in-review article. A scattering of topics is the point.

weikju

18 hours ago

I suppose writing an article title is hard. The article could be about a few different related things. The hardware and the software side of it.

That’s about all I’ll say though, not my article.

Imustaskforhelp

14 hours ago

Although the points about the software we self-host getting enshittified are somewhat valid, I feel like we can still see forks, migrate, and so on, so I am not particularly worried about that.

But the biggest thing I am worried about is the hardware prices too.

So I want to ask: is there any hardware (usually RAM) whose price isn't increasing insanely? Perhaps refurbished or auctioned servers?

What is the best way to get bang-for-the-buck hardware right now? Should we even buy hardware now, or wait 3-4 years for factory production to rise and the AI bubble to crash? I definitely think that RAM prices will fall off very steeply (it's almost a cycle in the RAM business).

I am not sure, but buying up small amounts of compute feels like a decent idea if you are doing anything computationally expensive. And of course, if you have something like Plex, then I suppose you have to expand on the storage side and not so much on the RAM side (perhaps some encoding/decoding could be RAM intensive, but I don't know).

I had heard the rumour that ASUS was ramping up chip production or something to save the hardware situation, but it turned out to be fake, so I'm not sure how to respond. But please, some hardware company should definitely see this opportunity, smh.

wrxd

13 hours ago

What do you want to run? Small services with a handful of users? Anything can serve them. Media libraries? As long as you have a CPU with Quick Sync you’re good for on-the-fly transcoding, and the real limiting factor becomes storage.

A TinyMiniMicro https://www.servethehome.com/introducing-project-tinyminimic... used PC is more than adequate for most workloads (except for local AI and if you want to have a huge amount of storage). Last time I checked the prices were in the ballpark of $100/$150 for a working machine.

New machines with an N-series Intel CPU are in a similar ballpark.

Imustaskforhelp

12 hours ago

If I am being honest: my father is in the broadband/bandwidth business where I live, and I recently told him I was thinking of opening up a simple cloud / extremely low-cost (in hardware) mini datacenter, or at least tinkering with the software side of these things (Proxmox, Incus and a lot more). He was interested in converting his office into a rack, and he can get a lot of static IPs and power, so I have been thinking about this workflow. The biggest problem to me seems to be the hardware.

He is really excited about this project. He brought me newspaper clippings the other day showing that my idea has potential, which is nice, and I have given him the task of using his contacts in our small city for hardware, auctions and rentals, and of gathering more information about some cheaped-out specs to start with. I don't want us to invest in a lot of hardware up front, but rather to reinvest the profits and maintain clear transparency.

Do you think we should postpone this idea for 3-4 years? (I am thinking so.) Honestly, I would love to build my own software, and I am thinking that within those years I can try out more pain points of other providers and build a list of the nice features I like (if you know of any, please let me know, as I am still making the list).

I am not trying to achieve AI purposes at all but rather simple compute (even low-end compute starting out)

Power consumption comparison isn't that much of an issue I think

Honestly, I am thinking that we should wait out this cycle of rising hardware prices so that we can buy at the start of the next cycle. But I am interested in whether NUCs would be good enough for my workflow, so I can point my father in that direction. I am not very knowledgeable about the hardware side of things, so I would really appreciate it if you could tell me more about it and what the best use cases for that hardware would be.

I saw from your article that Chick-fil-A uses Intel NUCs to run their Kubernetes clusters, so I am assuming they could be good enough for my use case as well?

Also, there is no guarantee that I end up doing this; it's still more an idea than anything. I would probably do some projections to see if it's worth it, and a lot of other things, before we get ourselves some basic cheap equipment to even start. If we do, we would probably start out with homelabbing equipment. To be clear, physical space for storage isn't that big of a worry starting out, as I think his office is big enough.

Honestly, right now, in my understanding RAM prices are what most kills the project and makes me want to reconsider: focus on the software side of things (build things myself, learn more) for a few years so that we can build out the hardware afterwards. I think this is the way to go. Talking to my father, he was super excited about it, so it might also give him a few years of spare time to get more familiar with the hardware side of auctions and such, so he can find us better deals. Please share any advice that you (or anyone) have about this; I would love to pass it on to my father so he can make some enquiries in the local markets and with his contacts.

crapple8430

12 hours ago

I don't really agree with this blog post; there is nothing enshittified about self-hosting.

But it does almost seem like there is a squeeze on general-purpose computing from all sides, including homelabs. The DRAM and SSD prices are just the latest addition to that. There's also Win 11 requiring TPM, which is not a bad thing by itself, but which will almost certainly take away the ability to run arbitrary OSes on PCs 5-10 years down the line. Or you'd still be able to boot them, but nothing will run without a fully trusted chain from TPM -> secure boot -> browser.

globalnode

18 hours ago

Time marches forward, but instead of progress we go backwards. Expect to write your own software on limited resources like it's 1990 again.

hsjdndvvbv

16 hours ago

Honestly, maybe we could learn a thing or two from 1990s computing?

Running an OS with a GUI in 16-32MB of RAM...

Memory management for programs...

dgeiser13

18 hours ago

I hate to tell them but everything is being enshittified.

smitty1e

10 hours ago

"Not me," said entropy, "I was already there at the outset."

Less cutely, this is an interesting topical site/newsletter => https://selfh.st

empressplay

18 hours ago

On-premming your Internet services just seems like an exercise in self-flagellation.

Unless you have a heavy-duty pipe to your prem, you're just risking all kinds of headaches, and you're going to have to put your stuff behind Cloudflare anyway; and if you're doing that, why not use a VPS?

It's just not practical for someone to run a little blog or app that way.

nulbyte

17 hours ago

It's not that much of a headache, and this isn't necessarily about public-facing sites and apps.

Take file storage: Some folks find Google Drive and similar services unpalatable because they can and will scan your content. Setting up Nextcloud or even just using file sharing built into a consumer router is pretty easy.
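(The "pretty easy" part is not an exaggeration; a throwaway Nextcloud instance is one command with the official image, though for real use you'd want a proper database and backups. The port and volume name here are arbitrary.)

    docker run -d --name nextcloud -p 8080:80 -v nextcloud_data:/var/www/html nextcloud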

You don't need to rely on Cloudflare, either. Some routers come with VPN functionality or can have it added.

The self-hosting most people talk about when they talk about self-hosting is very practical.

adastra22

17 hours ago

I don’t think you understand what on-premises means.

trueismywork

17 hours ago

Some of us have a LAN for our offices and TBs of data.

wltr

12 hours ago

Hey, but what’s up with DDR3?

> Even old hardware isn't safe: DDR4 prices are also affected, so that tiny ThinkCentre M720 won't save us.

Most of my home infrastructure is DDR2 or DDR3. It’s plenty fast for quite a lot of things. I really don’t care whether some background operation takes five minutes or an hour. I rather care how little energy and heat that machine produces.

WadeGrimridge

13 hours ago

This is not what enshittification means.

sn0n

16 hours ago

Sorry, but no. It's open source my guy, your ego and entitlement should be checked at the door as you enter the sandbox. Take anything you like.

Also, forking is an option, you can always use AI to keep it current.

jamilbk

18 hours ago

I don't fully understand the complaints about enshittification of open source permissively licensed software.

If the source code is available for you to fork, modify, and maintain as you see fit, what's the complaining really about?

CuriouslyC

18 hours ago

People are going to start doing this a lot more as agents improve. Most people only need a very small fraction of the features of SaaS, and that fraction is slightly different for everyone, so the economics of companies trying to use features to chase users is bad. Even worse, if you're on SaaS you can't modify the code, which will be crippling, so the whole SaaS model is cooked.

I think co-management is going to be the next paradigm.

trueismywork

18 hours ago

What's co-management?

CuriouslyC

17 hours ago

Managed services that you have some ability to modify, to customize or add functionality.

VerifiedReports

18 hours ago

I don't understand this fairly sparse "article."

"Plex added a paid license for remote streaming, a feature that was previously free. And then Plex decided to also sell personal data — I sure love self-hosted software spying on me."

How is it "self-hosted" if it's "remote streaming?" And if you're hosting it, you can throttle any outgoing traffic you want. Right?

The only other examples are Mattermost and MinIO... which I don't know much about, but again: Aren't you in control of your own host?

This article is lame. How about focusing on back-ends that pretend to support self-hosting but make it difficult by perpetuating massive gaps in its documentation (looking at you, Supabase)?

hsjdndvvbv

17 hours ago

> How is it "self-hosted" if it's "remote streaming?" And if you're hosting it, you can throttle any outgoing traffic you want. Right?

You host the Plex service with your media library. Plex allows you to stream without opening up your firewall to others. Not sure how it works exactly, because I never hosted it myself.

doix

17 hours ago

> Plex allows you to stream without opening up your firewall to others.

It relies on their hosted services/infrastructure. I avoid Plex for that reason. I just host my media with nginx + indexing enabled, WireGuard for the tunnel between server and client, and Kodi as the frontend to view the media (you can add an indexed HTTP server as a media source).

Works great. No transcoding like Plex, but that's less of an issue nowadays when hardware-accelerated decoders for H.264 & H.265 are common.
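The nginx side is tiny, something like this (the paths and WireGuard address are examples):

    server {
        listen 10.8.0.1:8080;      # bind only to the WireGuard interface, not the public IP
        root /srv/media;
        location / {
            autoindex on;          # the "indexing enabled" bit; Kodi browses this as an HTTP source
        }
    }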

mbirth

17 hours ago

> It relies on their hosted services/infrastructure.

Only if you want it to. Your local Plex server is always available on port 32400 - which can be opened up for others as well. But using Plex’s authentication is more convenient, of course.

doix

12 hours ago

Yeah, I was specifically talking about the "firewall" bypassing the parent mentioned (most likely combined with NAT punch-through as well). You could of course use Plex without that and use wireguard (or just make it available to the internet) and not rely on their infra.

TheCraiggers

17 hours ago

Do you have any recommendations for decoders? I've been using a fire stick for a bit but I wouldn't mind a better alternative.

VerifiedReports

16 hours ago

I use an Nvidia Shield for everything except Blu-Ray.

VerifiedReports

16 hours ago

That's exactly my point: It runs on your computer, on your LAN, serving your media. How is it doing anything outside your control, then?

adastra22

17 hours ago

I'm confused. There are two different streaming things on Plex. They support streaming, inside the Plex app, of content from the usual streaming services, much like Apple TV or your TV’s built-in media manager. They also support streaming your collection across the internet to wherever you are. Which is now behind a paywall?

blahlabs

17 hours ago

I don't use Plex anymore, but not long before I cancelled my account they started charging to access someone's library that had been shared with you if the sharing party did not have Plex Pass, or something to that effect.

boltzmann-brain

17 hours ago

> This article is lame. How about focusing on back-ends that pretend to support self-hosting but make it difficult by perpetuating massive gaps in its documentation (looking at you, Supabase)?

That's one way of enshittifying, but what the article talks about is nonetheless very important.

People rely on projects being open source (or rather: _hosted on github_) as some sort of mark of freedom from shitty features and burdensome monetization.

As the examples illustrate, the pattern of capturing users with a good offering and then subsequently squeezing them for money can very easily be done by open source software with free licenses. The reason for that is that source code being available is not, alone, enough to ensure not getting captured by adversarial interests.

What you ALSO need is people wanting to put in the work to create a parallel fork to continuously keep the enshittification at bay. Someone who rolls a distribution with a massive amount of ever-decaying patches, increasingly large amounts of workarounds, etc. Or, alternatively, a "final release"-style fork that enters maintenance mode and only ever backports security vulnerability fixes. Either of those is a huge amount of work, and it's not even certain that people will find that fork on their own rather than just assume "things are like that now".

Given that the code's originating corporation can and will eagerly throw whole teams of people at disabling such efforts, the counter-efforts would require the same amount of free labor to be successful - or even larger, given that it's easy to wreck things for the code's originator but it's difficult to fix them for the restoration crew.

This pattern, repeated in many projects over the decades since GPL2 and MIT were produced, displays that merely being free and open source does not create a complete anti enshittification measure for the end user. What is actually necessary is a societal measure, a safety web made up of developers dedicated to conservation of important software, who would be capable of correcting any stupid decisions made by pointy-haired managers. There are some small projects like this (eg Apache, and many more) but they are not all-encompassing and many projects that are important to people are without such a safety net.

So for this reason, eg when people are upset that mattermost limits the messages to 10000, their real quarrel isn't really even with the scorpion, who is known to sting, it is with the lack of there being a social safety net for this particular software. Their efforts would be well spent on rapidly building such a safety network to quickly force the corporation's hand into increasingly more desperate measures, accelerating their endgame and self-induced implosion. Then, after the corpo's greed inevitably makes them eat themselves in full, the software can enter the normal space of FOSS development rather than forever remain this corporate slave-product that is pact-bound to a Delaware LLC by a chain of corporate greed.

Only once any free fork's competition backed by VCs burning their money on a ceremonial heap has been removed can the free version of the software become the central source for all users and therefore become successful, rather than continuously play catch up with a throng of H-2B holders.

VerifiedReports

2 hours ago

I don't doubt the validity of those sentiments. I'm just perplexed (see what I did there) as to what the specific issue is in this case because I thought Plex was for streaming stuff from one side of your house to the other (and/or playing it back).