FusionX
2 hours ago
Given how the blog is presented, I assumed this was something novel that solved a unique problem, maybe a local multi-modal assistant for your device.
I installed it and it's none of that. It's a mere wrapper around small local LLMs. And it's not even multi-modal! Anyone could've one-shotted this in Claude in an hour (I'm not exaggerating).
What's the target audience here? Your average person doesn't care about the privacy value proposition (at least not at the cost of a severe drop in chat-model quality). And users who do want that control can already install LMStudio/Llama.cpp (which is dead simple to set up).
The actual release product should've been what's described in the "What's next" section.
> Instead of general chat, we shape Ensu to have a more specialized interface, say like a single, never-ending note you keep writing on, while the LLM offers suggestions, critiques, reminders, context, alternatives, viewpoints, quotes. A second brain, if you will.
> A more utilitarian take, say like an Android Launcher, where the LLM is an implementation detail behind an existing interaction that people are already used to.
> Your agent, running on your phone. No setup, no management, no manual backups. An LLM that grows with you, remembers you, your choices, manages your tasks, and has long-term memory and personality.
post-it
2 hours ago
> Anyone could've one-shotted this in Claude in an hour
I think they did. If you start the download and then open the sidebar and/or background the app, the download progress bar disappears and is replaced by the download button. If you press the download button again, the progress bar reappears at the correct point.
I find that Claude often makes little statefulness mistakes like that. Human developers do too, but the slower, more iterative nature of human development makes it more likely that such mistakes get caught.
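The statefulness mistake post-it describes usually comes down to where the progress lives: if it's held only in the view, rebuilding the view (e.g. after backgrounding the app) resets it. A minimal sketch of the fix, assuming a simple model/view split (class names here are invented for illustration, not Ensu's actual code):

```python
# Hypothetical sketch: keep download progress in a model object that
# outlives the view, so a rebuilt view derives the correct UI state.

class Download:
    """Model: owns the real progress, survives view re-creation."""
    def __init__(self, total_bytes):
        self.total_bytes = total_bytes
        self.received = 0

    def on_chunk(self, n):
        self.received += n

    @property
    def progress(self):
        return self.received / self.total_bytes


class DownloadView:
    """View: derives what to show from the model on every rebuild."""
    def __init__(self, download):
        self.download = download

    def render(self):
        if self.download.received == 0:
            return "button"  # nothing started yet: show download button
        return f"bar:{self.download.progress:.0%}"  # show progress bar


download = Download(total_bytes=100)
download.on_chunk(40)

# Backgrounding/reopening destroys and rebuilds the view object...
view = DownloadView(download)
# ...but since progress lives in the model, the bar resumes at 40%.
print(view.render())
```

The bug described in the comment corresponds to the view caching `received` itself: a freshly built view would then start from zero and fall back to showing the button, even though the model kept downloading.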
jubilanti
2 hours ago
> Anyone could've one-shotted this in Claude in an hour (I'm not exaggerating).
This probably could have been one-shotted with Sonnet, not even Opus. Given how over-indexed they are on LLM coding, Haiku might even be able to do it.
This is actually an interesting coding model benchmark task now that I think about it.
ttul
33 minutes ago
I hate to say it, but this looks like the sort of thing a CEO told their team to build on Monday morning in a panic because they are grasping for ways to participate in the AI craze. And the team did just that: they built it that morning using Claude Code.
There is truly nothing original here and the product doesn't have a chance in hell of earning money. Local LLMs on-device will be dominated by the device vendors, whose control of the hardware stack combined with their ability to subsidize billions of dollars of machine learning research gives them an unfair advantage. Apple knows what the next generation of silicon will deliver, and their ML engineers are already hard at work building models that will be highly optimized for that silicon a year or two ahead of time. Open source models are really great and are backed by well funded labs; however, delivering these models on-device in a way that pleases users will never be easier than it is for the vendors of the devices.
Plus, device vendors have ways of making money from local LLMs that third-party app providers do not. They can make their local LLM free and earn money on the hardware play, without skipping a beat on the billions of dollars of ongoing R&D. I don't see how third party app vendors make money here when they will be competing with the decent, totally free alternative that Apple and Google (and Samsung etc.) will load on in the next year or two.
Barbing
17 minutes ago
Wanted to share a message here for the CEO: don't feel too bad, because little is more common than getting caught up in this tech.
But where are they! https://ente.com/about
Small team, rooting for them
ttul
5 minutes ago
I write my comment with admiration for founders, because I am one. That being said, chasing trends without paying attention to the steamroller has killed more than one very good company and I have plenty of scar tissue as evidence...
reactordev
an hour ago
It’s a platform play so they can get people onto their defunct photos platform. “Big Tech” in a little suit.
fauigerzigerk
6 minutes ago
How is Ente Photos defunct? It's getting new features all the time and it works extremely well for me.
buster
9 minutes ago
What do you mean when you say defunct photos platform?
Barbing
16 minutes ago
All about immich now right?
kylehotchkiss
an hour ago
Local LLM options for less technical people are worth celebrating IMO. No, not "anybody" could have one-shotted this in CC in an hour.
We have not seen a tidal wave of untechnical people vibe coding up their own software solutions.
jermaustin1
25 minutes ago
> We have not seen a tidal wave of untechnical people vibe coding up their own software solutions.
When my little brother, who is a drummer and has never even looked at "code" before, had Claude one-shot a Python app that let him download songs from YouTube, extract the stems, collect tempo/key/etc. information, and then feed that into his DAW, all without ever looking at the code or knowing what any of it did, I knew we were about to see a LOT of single-use applications.
I'm not against it, honestly. I have always written little one-off scripts and apps that accomplished something faster than doing it manually, and now that an LLM can sometimes produce those one-shots in seconds, all my personal scripts are so much easier... that said, I definitely read the scripts that are output, and attempt to step through them in a debugger before assuming it is all good.
nozzlegear
19 minutes ago
> We have not seen a tidal wave of untechnical people vibe coding up their own software solutions.
Have we? Outside of the marketing material from AI companies and various AI-adjacent CXOs on Twitter furiously tweet-storming their forays into vibe coding with whatever product they're hawking, I struggle to think of any real-world examples that would merit being called a tidal wave of non-technical people.
I do agree that more local LLM options are always better.
Barbing
16 minutes ago
>not
collabs
29 minutes ago
To add to this, the value that ente or someone like that can bring to the table here is a firm pledge to improve it and maintain it going forward.
That to me is more valuable than code vibe-coded by Claude in one afternoon.
moffkalast
33 minutes ago
Probably just another ollama-type service that wants to slide itself in between the user and local models, so they can take all the credit and work on convenience-based platform lock-in, then later introduce paid tiers.