And if you’re willing to pay $195 for the package and upgrades of $50 each time the OS breaks the app, we are good. And as long as this software doesn’t have to sync across mobile and desktop. And as long as this is single user software.
Then sure.
Take Microsoft Office: if you just need Word, Excel, PowerPoint, Outlook, and OneNote, that's the same price as a two-year subscription to Office 365 / Microsoft 365 / Copilot. That's potentially a good deal, depending on your needs.
Some people have been running Office 2003 on everything from Windows XP to Windows 10. Assuming they bought the license 22 years ago, that's pretty cheap, probably $15 per year. As a bonus, they've never had their workflow disrupted.
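The back-of-the-envelope math, with the purchase price as an assumption (retail pricing for Office 2003 varied by edition):

    # Rough amortization of a one-time Office 2003 license over its lifetime.
    # The $330 purchase price is an assumption implied by the numbers above.
    purchase_price = 330   # USD, assumed one-time cost in 2003
    years_in_use = 22      # 2003 through today
    print(f"${purchase_price / years_in_use:.0f} per year")  # -> $15 per year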
On the other hand, there's 17+ years of security updates missing... and Office macros are a well-known attack vector.
You shouldn't upgrade the OS on a machine running important paid-for software that you're still using.
IMHO, this overlooks probably the biggest advantage of all: software you can buy once and run locally is predictable.
While the modern world of mobile devices and near-permanent fast and reliable connectivity has brought some real advantages, it has also brought the ability for software developers to ruthlessly exploit their users in ways no-one would have dreamt of 20 or 30 years ago. Often these are pitched as if they are for the user’s benefit — a UI “enhancement” here, an “improved” feature there, a bit of casual spying “to help us improve our software and share only with carefully selected partners”, a subscription model that “avoids the big up-front cost everyone used to pay” (or some questionable logic about “CAPEX vs OPEX” for business software), or the reassurance that “our startup has been bought by a competitor, but all the customers who chose our product specifically to avoid that competitor’s inferior alternative have nothing to worry about, because the new owners have no ulterior motive and will keep developing it just the way we have so far.”
The truth we all know but don’t want to talk about is that many of the modern trends in software have been widely adopted because they make things easier and/or more profitable for software developers at the direct expense of the user’s experience and/or bank account.
I think you missed the most important thing, more important than any of those: if there is no service, then the service cannot delete or deny access to your data.
And related, if there is no service, then the service cannot fail to secure your data.
No possibility of a billing or login error where you lose access to your stuff because they think you're not current or valid when you are.
No possibility of losing access because the internet is down or your current location is geo-blocked, etc.
No possibility of your account being killed because they scanned the data and decided it contained child porn, or pirated movies or software, or CAD files to make guns, or hate speech, etc.
Those overlap with the free and privacy points, but the separate point is not the money or the privacy; it's the fact that someone else can kill your stuff at any time, without warning or recourse.
And someone else can lose your stuff, either directly by having their own servers broken into, or indirectly, by your login credentials getting leaked on your end.
> the service cannot delete or deny access to your data... the service cannot fail to secure your data.
You, on the other hand, are much more likely to do one or both of these things to yourself.
If we're tallying such things up, I've had those sorts of major issues with Trello, Google, Navionics, FitBit, FixD, and a number of other companies over the years. My local data, on the other hand, has had zero issues.
I will never decide that my kids' doctor photos are child porn and then delete my own access to both the email and the phone number I use to access my own bank and retirement accounts.
The fact that a hard drive can break and you can fail to have a backup is not remotely in the same class of problem as living at the whim of a service provider.
I’ve seen this happen at a previous job, where a customer’s IT team deleted years’ worth of financial records because they didn’t know the data was there when they cleaned up the server. Our CEO had to go and help them rebuild their books, at great cost to the customer!
All of those advantages are also reasons why businesses don’t adopt it…
It's open source; you "just" have to help write it!
This entire paradigm gets turned on its head with AI. I tried to do this with purely local compute, and it's a bad play. We don't have good edge compute yet.
1. A lot of good models require an amount of VRAM that is only present in data center GPUs.
2. For models that can run locally (Flux, etc.), you get dramatically different performance between top-of-the-line cards and older GPUs. Then you have to serve different models with different sampling techniques to different hardware classes.
3. GPU hardware is expensive and most consumers don't have discrete GPUs. You'll severely limit your TAM if you require one.
4. Mac support is horrible, which alienates half of your potential customers.
It's best to follow the Cursor model where the data center is a necessary evil and the local software is an adapter and visualizer of the local file system.
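To make points 2 and 3 concrete, this is roughly the capability check you end up writing if you try anyway; the 12 GB threshold and the backend names are illustrative assumptions, not anyone's actual product code:

    # Sketch of routing inference by hardware class: run the full model locally
    # only if a capable GPU is present, otherwise degrade or fall back to a
    # hosted endpoint. Threshold and backend names are illustrative assumptions.
    import torch

    MIN_LOCAL_VRAM_GB = 12  # assumed cutoff for the "full" local model

    def pick_backend() -> str:
        if torch.cuda.is_available():
            vram_gb = torch.cuda.get_device_properties(0).total_memory / 1e9
            if vram_gb >= MIN_LOCAL_VRAM_GB:
                return "local-full"       # big model, local sampling
            return "local-distilled"      # smaller model / cheaper sampler for older cards
        if torch.backends.mps.is_available():
            return "local-distilled"      # Apple silicon: workable, but yet another code path
        return "cloud"                    # no usable GPU: send the request to the data center

    print(pick_backend())

Even this toy version already means maintaining three model/sampler combinations instead of one, which is exactly why the data center ends up winning.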
Define "good edge compute" in a way that doesn't have expectations set by server-based inference. I don't mean this to sound like a loaded request - we simply can't expect to perform the same operations at the same latency as cloud-based models.
These are two entirely separate paradigms. In many instances it is quite literally impossible to depend on models reachable over RF, as in an ultra-low-power forest mesh scenario, for example.
We're in agreement that not all problem domains are amenable to data center compute: those without internet access, etc.
But for consumer software that can be internet-connected, data center GPUs are dominating local edge compute. That's simply because the models are being designed to use a lot of VRAM.
> This entire paradigm gets turned on its head with AI. I tried to do this with purely local compute, and it's a bad play. We don't have good edge compute yet.
Your TV likely has a good enough CPU to run a decent model for home automation. And a game console most definitely does.
I'd love to see a protocol that would allow devices to upload a model to a computer and then sleep until a command is received. Current AI models are really self-contained; they don't need complicated infrastructure to run them.
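Something like this, maybe? A toy sketch of the register-then-sleep idea over plain HTTP; the endpoint names and payload fields are all made up:

    # Toy sketch: a low-power device registers (a reference to) its model with a
    # beefier machine on the LAN, then sleeps; later, short commands wake the
    # host, not the device. Endpoints, fields, and the inference stub are made up.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    REGISTRY = {}  # device_id -> model reference uploaded at registration time

    class Handler(BaseHTTPRequestHandler):
        def do_POST(self):
            body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
            if self.path == "/register":
                REGISTRY[body["device_id"]] = body["model_ref"]
                self._reply(204, "")
            elif self.path == "/command":
                model_ref = REGISTRY.get(body["device_id"], "<no model registered>")
                self._reply(200, f"would run {model_ref} on: {body['text']}")  # inference stub
            else:
                self._reply(404, "unknown endpoint")

        def _reply(self, code, text):
            self.send_response(code)
            self.end_headers()
            self.wfile.write(text.encode())

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()

The point is the second half of the idea: the device only needs enough power to fire one registration call and the occasional tiny command packet, while the heavy lifting happens on whatever box on the network has spare compute.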