Skyy93
6 hours ago
This article makes no real sense to me.
> You think of something new and express it - through a prompt, through code, through a product - it enters the system. Your novel idea becomes training data. The sheer act of thinking outside the box makes the box bigger.
This was the same before: if you had a novel idea and made a product out of it, others followed. Especially for LLMs: they are not (till now) learning on the fly. Claude Opus 4.6's knowledge cutoff was August 2025, so every idea you type in after that date may end up in training data but isn't available to the model; you only have to be fast enough. And LLMs/AI agents like Claude are exactly what give you the speed you need to bring out something new.
The next thing is that we also have open-source and open-weight models that anyone with a decent consumer GPU can fine-tune and adapt, so it's not only in the hands of a few companies.
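To illustrate how low that barrier is: a LoRA fine-tune with Hugging Face's peft library fits on one consumer GPU because only tiny adapter matrices get trained. Everything below (model name, target modules, hyperparameters) is a placeholder, not a recipe:

```python
# Hypothetical sketch of a consumer-GPU fine-tune via LoRA adapters.
# The model name and hyperparameters are placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("some-open-weight-model")
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora)   # freeze the base, train small adapters
model.print_trainable_parameters()    # typically well under 1% of all weights
# ...then run an ordinary training loop on your own data.
```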
> We will again build and innovate in private, hide, not share knowledge, mistakes, ideas.
Why should this happen? The moment you make your idea public, anyone can build it. That leads to greater proliferation than before, when the artificial barrier of having to learn to code prevented people from getting what they wanted or from creating what they wanted to create.
munificent
3 hours ago
> This was the same before: if you had a novel idea and made a product out of it, others followed.
The article says:
"Ideas are cheap - execution is hard"
"Announcing, signaling your ideas offered much greater benefit than risk, because your value multiplied by connections, and execution was the moat you could stand behind."
That's the key difference. It used to be much harder for a competitor to catch up to the state of your implementation.
middayc
3 hours ago
And it's not just that execution is faster now. The competition used to see only the "outer shell" of your idea. But LLM platforms (the forest) see the internals, if you used them to explore and develop it. They also see all the similar ideas across the globe.
And they own the compute and the models - you only rent them. To extend this: they could "pre-cog" your idea and build it even before you do.
I'm not talking about what is happening now, I'm just playing out the thought experiment.
Nevermark
3 hours ago
Sharing any novel idea has never been so costly.
I am not arguing against sharing. Sharing can be for the greater good.
But as you note, things have changed. We could once reasonably assume that a genuinely significant idea, set free, might keep going in the direction we shoved it for a minute, or fade into inaccessibility.
Not any more.
mekoka
2 hours ago
You seem to be agreeing, not arguing, with the person you're replying to.
cryptonector
28 minutes ago
Indeed. So?
6510
an hour ago
Just a side note:
> "Ideas are cheap - execution is hard"
I would argue this mantra says more about the person repeating it. It simply means the person has no good ideas and is bad at execution.
I've not met many, but I'm sure there are many out there who are scary good at execution. Something like 1% perspiration, 99% experience. I can have a designer do a 100-euro design, hire someone to write nice code, rent a factory or an office; I might even be able to buy the machines at a good price. What I can't do is spin the Rolodex and (in 20 minutes) land enough clients who would absolutely love to work with me again. I can't find those private meetings, and I wouldn't be able to extend my reputation with the new project.
People with good ideas don't talk about them unless it's required. They don't talk with "ideas are cheap" people; it's pearls before swine. You can spot some of them by their bursts of multiple unrelated, complex patents. My favorites are the Rube Goldberg types of machines that combine well-known things in ways that exceed the sum of the parts. Something like: step 5 uses the vibrations from step 1, while step 3 uses the heat from step 6.
To have good ideas you need many of them, but you also need to know execution, or you end up thinking the easy stuff is hard and the hard stuff is easy. Improvement is unlikely from there.
cryptonector
17 minutes ago
> Especially for LLMs: they are not (till now) learning on the fly. Claude Opus 4.6's knowledge cutoff was August 2025, so every idea you type in after that date may end up in training data but isn't available to the model; you only have to be fast enough.
First of all: it's not as though no new LLMs are being trained. Of course they are.
Second: learning LLMs are not far off. And since they can typically search the web via agents, they effectively can "learn" now; they can also learn (not so well) by writing stuff into a document hidden from you. Indeed, some LLMs can inspect your other sessions with them and refer to them in future sessions -- I've noticed this with Claude.
Third: we already see some AI companies wanting to train their models on your prompts. It's going to happen.
> The next thing is that we also have open-source and open-weight models that anyone with a decent consumer GPU can fine-tune and adapt, so it's not only in the hands of a few companies.
There's a pretty good chance that LLMs buff open source, yes.
> > We will again build and innovate in private, hide, not share knowledge, mistakes, ideas.
> Why should this happen? The moment you make your idea public, anyone can build it. [...]
This was always the case, but now the cycle is faster. Therefore, if you must use an LLM, you might use one that you run on your own hardware -- now your prompts are truly yours. But as TFA notes, the AIs will learn just from your (and your private LLMs') searches, and in some cases that will be enough for them to figure out what you're up to. Oh sure, maybe the Microsofts and Googles of the world will not be able to capitalize on the millions of interesting ideas floating about. But still: the moment you uncloak, the machine will eat your future alive, so you'll try to stay off its radar and build a moat it can't see (good luck!). Well, that's what TFA says; it seems very plausible to me.
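For illustration, here is roughly what keeping it local looks like with llama-cpp-python; the model path and prompt below are made up:

```python
# Illustrative only: local inference with llama-cpp-python, so your
# prompts never leave your machine. The paths here are invented.
from llama_cpp import Llama

llm = Llama(model_path="./models/some-open-weights.gguf", n_ctx=4096)
out = llm("Draft a landing page for my unannounced product: ...",
          max_tokens=256)
print(out["choices"][0]["text"])   # nothing was sent over the network
```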
RajT88
6 hours ago
> This was the same before: if you had a novel idea and made a product out of it, others followed.
You've almost captured the full picture of it.
If you have a great idea, it's not going to be self-evidently a great idea until you've proved it can make money. That's the hard part, and it comes at great personal, professional, and financial risk.
Algorithms are cheap. Sure, they could use your LLM history to figure out what you did, or the LLM could just reason it out. It could save them some work.
But again - the hard part is not cloning the product, it's stealing your customers. People don't seem to be focused on the hard parts.
cryptonector
14 minutes ago
> But again - the hard part is not cloning the product, it's stealing your customers.
Yes. A Red Hat, a Microsoft -- these companies have processes, organizational structure, politics, friction, etc. They might like your product, but replicating it might not be easy for them, for reasons that have nothing to do with how easy the work itself would be if they were free to just do it. Small shops with vision might well have a bright future, for a while, maybe.
annie511266728
an hour ago
I don't think the risk is that they copy your app.
The risk is that they make the category a built-in feature in something people already use. At that point, copying the product and taking the customers start to look like the same problem.
oh_my_goodness
5 hours ago
Yeah, and the big guys can't steal your customers. What a crazy idea.
RajT88
5 hours ago
The point is: they're going to do that anyway if they want to. Owning the LLM platforms just makes it marginally cheaper to do so.
It's not the risk it's being made out to be.
oh_my_goodness
4 hours ago
Absolutely. The fact that they know your app better than you do, and that they can revoke your ability to develop it at any moment? Those are just details. Those things won't change the game at all.
satvikpendem
4 hours ago
Unless you're using their API (in which case there's always platform risk, same as before), this is not an issue. There are lots of half-assed implementations of ideas by the big companies that smaller companies run circles around; The Innovator's Dilemma was literally written about this.
oh_my_goodness
3 hours ago
In my opinion Christensen wasn't talking about outsourcing your entire development process to a competitor with much deeper pockets, giving them the ability to turn off your development at will [1], and then running rings around them. I'm sure you're familiar with his story about Dell and Asus. This is worse.
[1] Unless you're assuming that you maintain control over your technology while outsourcing most of the development thinking to a rented AI? Times have changed, and the API is not the only issue anymore.
satvikpendem
3 hours ago
What is the issue? Local models still exist and will continue to exist, and even if they didn't, good old-fashioned hand coding will never go away. The point is that even AI companies are run by people, and one company cannot make every product well; there are always exploitable gaps in the market.
hrimfaxi
5 hours ago
> But again - the hard part is not cloning the product, it's stealing your customers. People don't seem to be focused on the hard parts.
Big companies seem to be bad at innovating but really, really good at enterprise sales.
zar1048576
3 hours ago
I don't know if that's necessarily true. I do think a big part of enterprise sales involves building a comprehensive solution that works well within the customer's ecosystem. Start-ups tend to build point products, which have value but are still missing the functionality (even if that functionality is not scintillating) that customers need in order to easily deploy and maintain a solution. Customers also care about things like the stability of their vendors and the level of available support.
RajT88
an hour ago
I've seen big companies manage to duplicate startup-like culture internally with small teams. Weird things like directors handling build and source-control duties, 12-hour days, working weekends.
These teams said that per man-hour they brought more value to the company than any other team. (But you know, they all say that)
8bitsrule
an hour ago
> This was the same before: if you had a novel idea and made a product out of it, others followed.
March 20, 1926: Hungarian physicist and electrical engineer Kalman Tihanyi applies for his first patent for a fully electronic television system. Tihanyi's ideas are so essential that, in 1934, RCA is required to buy his patents.
Kalman who?
middayc
5 hours ago
> This was the same before: if you had a novel idea and made a product out of it, others followed. Especially for LLMs: they are not (till now) learning on the fly. Claude Opus 4.6's knowledge cutoff was August 2025, so every idea you type in after that date may end up in training data but isn't available to the model; you only have to be fast enough. And LLMs/AI agents like Claude are exactly what give you the speed you need to bring out something new.
You have a point about the update intervals and the higher speed they provide to developers. But you are talking about now, and I was making a thought experiment about a potential future. LLMs are not learning on the fly, but I suspect they do log the conversations and their responses, and could also deduce from further interaction whether a particular response was satisfactory to the user. So in a world where available training data is drying up, nobody is throwing all this away. Gemini even has direct upvote/downvote on responses. Algorithms will probably improve, and the intervals will probably shorten.
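To make that concrete, here is a toy sketch of the kind of heuristic labeling I mean; every field name and rule below is invented, not any provider's actual schema:

```python
# Invented example: turning logged chat turns into weak preference
# labels for later training. None of this is a real provider's code.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LoggedTurn:
    prompt: str
    response: str
    explicit_vote: Optional[int]  # +1 / -1 from a thumbs widget, if any
    user_regenerated: bool        # user immediately asked for a redo
    user_built_on_it: bool        # next message extends the answer

def weak_label(turn: LoggedTurn) -> Optional[int]:
    """Guess whether a response satisfied the user."""
    if turn.explicit_vote is not None:
        return turn.explicit_vote   # direct signal, like Gemini's votes
    if turn.user_regenerated:
        return -1                   # probably unsatisfactory
    if turn.user_built_on_it:
        return +1                   # probably useful
    return None                     # silence is the ambiguous case
```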
Given the detailed information that all this back-and-forth generates, I think it's not hard to use similar technology to track underlying trends, gather all the problems associated with them and all the solution space being talked about - and generate the solution before even the people who thought of it release it. Theoretically :)
I think open development will become less open. I don't like it - but I think it's already happening. First, all the blogs and forums moved to specialized platforms (SO, Discords, ...), and now even some of those are d(r)ying. If people (in extreme cases) don't even read the code they produce, why would they read about the code, or discuss code that's not even in their care? And that's without the theoretical fear of the global Borg slurping up all they write.
Dusseldorf
an hour ago
> LLMs are not learning on the fly, but I suspect they do log the conversations and their responses, and could also deduce from further interaction whether a particular response was satisfactory to the user.
It seems like this is hard to do reliably across the board. Sometimes when I stop interacting, it's because it nailed the solution; sometimes it's because it went so poorly that I opted to bin it and do it myself. Maybe all of the mid-conversation planning and feedback is enough, though.
tayo42
2 hours ago
> Especially for LLMs: they are not (till now) learning on the fly.
Was this just awkward phrasing, or did something change so that they now learn after training?
Dusseldorf
an hour ago
There have been several projects lately attempting to create running context/memory, and Claude Code also has some concept of continuous conversational memory, but all of these are bolted on at inference time; there's still no concept of conversations feeding back into base-model training/weights on the fly.
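Schematically, all of these approaches boil down to something like the sketch below; the function and file names are stand-ins, not any real product's internals:

```python
# Stand-in sketch: "memory" today is just text re-injected at
# inference time. The model call is a dummy; its frozen weights
# never change between conversations.
MEMORY_FILE = "memory.txt"

def frozen_model(prompt: str) -> str:
    return f"(completion for: {prompt[:40]}...)"  # dummy stand-in

def chat(user_msg: str) -> str:
    try:
        memory = open(MEMORY_FILE).read()   # past notes, plain text
    except FileNotFoundError:
        memory = ""
    reply = frozen_model(memory + "\n" + user_msg)
    with open(MEMORY_FILE, "a") as f:       # "learning" = appending text
        f.write(f"user: {user_msg}\nassistant: {reply}\n")
    return reply

# What does NOT happen anywhere in production, as far as I know:
# model.weights = train_step(model.weights, conversation)
```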