Zagreus2142
3 days ago
Can someone give the counterargument to my initial cynical read of this? That read being: OpenAI has more money than it can invest productively within its own company and is trying to cast a net to find new product ideas via an incubator? I can't imagine SoftBank or Microsoft is happy about their money being funneled into something like this, and it implies they have run out of ideas internally. But I think I'm probably being too reflexively cynical.
AnEro
3 days ago
I think that MIT study of 95% of internal AI projects failing has scared a lot of corporations off of risking time on it. I think they also see that they are hitting a limit on profitable intelligence from their services. (With the growth in intelligence over the past 6–8 months being more realistic, not unbelievable like in the past few years.)
I think everyone is starting to see this as a middleman problem to solve. Look at ERP systems, for instance: when they popped up, the industry had some growing pains. (Or even early Windows/Microsoft and its 'developers, developers, developers' target audience.)
I think OpenAI sees it will take a lot of third-party devs to take what OpenAI has and run with it. So they want to build a good developer and startup network to make sure there is a good, solid ecosystem of AI options that corporations and people can use.
Workaccount2
3 days ago
The MIT study found 90% of workers were regularly using LLMs.
The gap was that workers were using their own implementation instead of the company's implementation.
keeda
3 days ago
The MIT study as released also does not really provide any support for the 95% failure rate claim. Until we have more details, we really don't know where that number came from:
https://www.linkedin.com/feed/update/urn:li:activity:7365026...
AnEro
3 days ago
Yeah, from what I understand, 'chats' and AI coding are areas where they already dominate the market / are a leader, and are good/okay products. It's the other use cases they haven't delivered on, in terms of other companies using them as a platform to deliver AI apps, which I would imagine was a huge vertical in their pitches to investors and internal plans.
These third-party apps drive huge token usage with agentic patterns. So losing out on them, and being forced to make more internal products tuned to specific use cases, is not something they want to build out or explore.
dingnuts
3 days ago
[flagged]
AnEro
3 days ago
AI coding is mid (okay), yes; my main point is that people use it and it's a good line of business for them right now. They expected bigger breakthroughs, like GPT-2 to 3 to 4, and that's not happening, so they have to lean on the other aspects of the business more.
The fact that it is mid is why they really need all the other lines of business to work. AKA selling tokens to AI apps that specialize in other mid products, and limiting the snake-oil AI products that are littering the market and ruining AI's image as the new catch-all solution.
keeda
3 days ago
I was a big user of IntelliSense and, even more heavily, IntelliJ for most of my career. It truly seemed like magic back then. I recall telling a colleague who preferred Emacs that it felt like having an editor that could read your mind, and would joke that my tab key was getting worn out.
Then I discovered LLMs.
If you think IntelliSense is comparable to what LLMs can do, you really, really need to try giving an AI higher-level problems to solve. Throwaway example I gave in a similar thread a few weeks ago: https://news.ycombinator.com/item?id=44892576
I think a big part of simonw's shtick is trying to get people to give LLMs a proper try, and TBH that's what I end up doing a lot too, including right now! The problem is a "proper try" takes dedicated effort, because it's not obvious where the AI will excel or fail for your specific context, and people legitimately don't have enough time for that.
But once you figure it out, it feels like when you first discovered IntelliSense, except you already know IntelliSense, so it's like... IntelliSense raised to the power of IntelliSense.
skydhash
3 days ago
The thing is that the languages that need IntelliSense that much are languages that make it too easy to construct complex systems. For Lisp and C, you can get autocompletion for free, and indexing to offer doc previews and signatures can be done quite easily as well. There's also an incentive to keep things short and small.
Then you have Java and C#, where you need a whole IDE if you're writing more than 10 lines, because using anything brings the whole jungle with it.
keeda
3 days ago
Hmm, I think all languages, regardless of verbosity, could be better with IntelliSense. I mean, if the IDE can reliably predict what you intend to type based on the context, regardless of the complexity of the application involved, why not have it?
Seems like languages like Java and C# that encourage more complexity just aim to provide richer context to mine. Simple example: given an incomplete line like "TypeA foo = bar.", the IDE can very easily figure out you want "bar.getBlah(baz)", because getBlah has a return type of TypeA and baz is the only suitable variable in scope. But having all that context available at that point requires a whole bunch of setup beforehand, like fine-grained types supported by a rich type system, function signatures, and so on, which incentivizes verbosity that usually scales with the complexity of the app.
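To make that concrete, here's a minimal Java sketch of that scenario (TypeA, Bar, getBlah, and baz are just the hypothetical names from the example above, not any real API):

    // Hypothetical types for the completion example; nothing here is a real API.
    class TypeA { }

    class Bar {
        // The only member of Bar whose return type is TypeA.
        TypeA getBlah(int baz) { return new TypeA(); }
    }

    class Example {
        void demo(Bar bar, int baz) {
            // At the moment you type "bar.", the IDE already knows the
            // assignment target is TypeA. getBlah(int) is the only member
            // returning TypeA, and baz is the only int in scope, so
            // "bar.getBlah(baz)" can be ranked as the top completion from
            // type information alone.
            TypeA foo = bar.getBlah(baz);
        }
    }

All of that inference falls out of the declared types, which is exactly the context the verbosity pays for.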
So yes, that's a lot of verbosity, but also a lot of context. To your point, I feel like the philosophy of languages like Java and C# is deliberately based on providing enough context for sophisticated tooling like IntelliSense and IntelliJ.
Unfortunately, the languages came before such sophisticated tooling existed, and when good tools did exist they were expensive; even now that those tools are widely and freely available, many people still don't use them. (Plus, in retrospect, the language designs themselves genuinely turned out to be more complex than ideal in some respects.)
So the current reputation of these languages for encouraging undue complexity is probably due to their philosophies being grounded in sound reasoning but based on predictions that didn't quite pan out as expected.
skydhash
3 days ago
The thing is, we did have nice tooling before those languages came to be. If you look at Smalltalk, it had this kind of context in an even more powerful way: you could browse the whole library in a few clicks and view its code. And it had a Playground element where you could try things out and design stuff. And everything was inspectable.
Same with Lisp: if you take Emacs as an example, you have instant documentation for every function. Another example is Python, where there’s a help system embedded into the language.
Java is basically unwritable without a full indexer and completion. But it has a lot of guardrails and its verbosity discourages deviation.
And today we have Swift and Kotlin, which are barely better. They do a lot of magic behind the scenes to reduce verbosity, but you’re still reliant on the indexer, which is now coupled with the compiler for the magic stuff.
Better languages insist on documentation, contextual help, shorter programs, no magic unless created by the programmer, and visibility (inspection with a debugger, and traceability with the system source available, if possible).
keeda
a day ago
I never used Smalltalk, but from what I've heard about it, I feel like Java/C# etc. were a deliberate push towards that kind of environment via IDEs. I'm not sure why Smalltalk didn't catch on, but it may have something to do with resistance from the C++ programmers that Guy Steele mentioned they had to drag towards Lisp via Java. It seems to me that the current crop of languages is the result of this forced evolution of a reluctant developer market from relatively barebones languages like C/C++ towards a Smalltalk-like future.
r0m4n0
3 days ago
I think it’s more like OpenAI has the name to throw around and a lot of credibility, but no products that are profitable. They are burning cash and need to show a curve where they can reach profitability. Getting 15 people with 15 ideas they can throw their weight behind is worth a lot.
chrishare
3 days ago
Yeah, more or less. Being in the application space as well as the inference space hedges a variety of risks: that inference margins will get squeezed, that competition will continue to increase, etc.
r0m4n0
2 days ago
Yeah, and if you look at all of the job openings they have right now, they are mostly in the “applied AI” space, which is a very different thing from what they have been doing altogether. This is mostly generic enterprise development, which is how they will try to become profitable.
rich_sasha
3 days ago
Without putting my weight behind them, here are some counterarguments:
- OpenAI needs talent, and it's generally hard to find. Money will buy you smart PhDs who want to be on the conveyor belt, but not people who want to be at the centre of a project of their own. This at least puts them in the orbit of OpenAI - some will fly away, some will set up something to be acqui-hired, some will just give up and try to join OpenAI anyway
- the amount of cash they will put into this is likely minuscule compared to their mammoth raises. It doesn't fundamentally change their funding needs
- OpenAI's biggest danger is that someone out there finds a better way to do AI. Right now they have a moat made of cash - to replicate them, you generally need a lot of hardware and cash for the electricity bill. Remember the blind panic when DeepSeek came out? So, anything they can do to stop that sprouting elsewhere is worth the money. Sprouting within OpenAI would be a nice-to-have.
Zagreus2142
2 days ago
Thanks! I think these are strong points, especially about the reaction to DeepSeek. I did have an assumption I didn't put in my original message: that they would probably be making investment offers to founders who walked into this with something like DeepSeek, which would balloon the costs well beyond office space and engineer time. But even getting advance knowledge of the next big idea out of this would be worth the cost of entry, yep.
ozgung
3 days ago
I don't think it's about money; they don't invest anything. They gather data about "technical talent" working on AI-related ideas. They will connect with 15 of these people to see if they can build it together.
LordDragonfang
3 days ago
It seems almost like... an internship program for would-be AI founders?
My guess is this is as much about talent acquisition as it is about talent retention. Give the bored, overpaid top talent outside problems to mentor for/collaborate on that will still have strong ties to OpenAI, so they don't have the urge to just quit and start such companies on their own.
tern
3 days ago
It's possible that a single senior employee just wanted to do this and it doesn't cost that much and their manager was like "sure"
albingroen
3 days ago
I really do want this to be the case
haute_cuisine
3 days ago
Softbank or Microsoft can’t be happy or sad. CEOs only care about the share price going up while they’re holding the wheel. If Sam wants to start the idea incubator, why would they want to shut it down?
Zagreus2142
2 days ago
My thinking was that both of these large investors specifically want OpenAI to produce something like AGI or, failing that, something so popular and useful that they make enough money not to care. And they want results this year or early next year. SoftBank's latest investment round is partially tied up in OpenAI resolving its non-profit status by the end of this year. Training random founding engineers, with no expectation that they'll even use GPT-5, instead of traditional hiring feels either like a lack of focus or naive at this critical juncture.
But having said that, I do see the wisdom in the comments that the costs of running a 5-week course/workshop are low, and that having a view into what people are making outside of the OpenAI bubble is a decent return all on its own.
xpe
3 days ago
> I can't imagine Softbank or Microsoft is happy about their money being funneled into something like this
Imagining one negative spin doesn’t an imagination make. Imagine harder.
drexlspivey
3 days ago
OpenAI definitely doesn’t have more money than it can invest. They burn cash like crazy; that’s why they keep raising money every 6 months.
ark296
2 days ago
Sama liked YC so much that he wanted a mini-YC inside OpenAI, maybe.
ks2048
3 days ago
> OpenAI has more money than it can invest productively
I don't think there is any money given, except travel costs for the first and last weeks.
spott
3 days ago
I mean, how much money are they throwing at this? I doubt it approaches anything close to a percent of the cash they have on hand.