empiko
18 hours ago
Observe what the AI companies are doing, not what they are saying. If they expected to achieve AGI soon, their behaviour would be completely different. Why bother developing chatbots or doing sales, when you will be operating AGI in a few short years? Surely all resources should go towards that goal, as it is supposed to usher humanity into a new prosperous age (somehow).
Lichtso
9 hours ago
> Why bother developing chatbots
Maybe it is the reverse? It is not them offering a product; it is the users offering their interaction data. Data which might be harvested for further training of the real deal, which is not the product. Think about it: they (companies like OpenAI) have created a broad and diverse user base which, without a second thought, feeds them up-to-date info about everything happening in the world, down to individual lives and even people's inner thoughts. No one in the history of mankind ever had such a holistic view, almost a god's-eye view. That is certainly something a superintelligence would be interested in. They may have achieved it already and we are seeing one of its strategies playing out. I'm not saying they have, but this observation is not incompatible with it, nor does it indicate they haven't.
blibble
7 hours ago
> No one in the history of mankind ever had such a holistic view, almost a god's-eye view.
I distinctly remember search engines 30 years ago having a "live searches" page (with optional "include adult searches" mode)
ysofunny
7 hours ago
That possibility makes me feel weird about paying a subscription... they should pay me!
Or the best models should be free to use. If it's free to use, then I think I can live with it.
grafmax
7 hours ago
> it is supposed to usher humanity into a new prosperous age (somehow).
More like usher in climate catastrophe way ahead of schedule. AI-driven data center build-outs are a major source of new energy use, and this trend is only intensifying. Dangerously irresponsible marketing cloaks the impact of these companies on our future.
Redoubts
4 hours ago
Incredibly bizarre take. You can build more capacity without frying the planet. Many AI companies are directly investing in nuclear plants for this reason, for example.
imiric
15 hours ago
Related to your point: if these tools are close to having super-human intelligence, and they make humans so much more productive, why aren't we seeing improvements at a much faster rate than we are now? Why aren't inherent problems like hallucination already solved, or at least less of an issue? Surely the smartest researchers and engineers money can buy would be dogfooding, no?
This is the main point that proves to me that these companies are mostly selling us snake oil. Yes, there is a great deal of utility from even the current technology. It can detect patterns in data that no human could; that alone can be revolutionary in some fields. It can generate data that mimics anything humans have produced, and certain permutations of that can be insightful. It can produce fascinating images, audio, and video. Some of these capabilities raise safety concerns, particularly in the wrong hands, and important questions that society needs to address. These hurdles are surmountable, but they require focusing on the reality of what these tools can do, instead of on whatever a group of serial tech entrepreneurs looking for the next cashout opportunity tell us they can do.
The constant anthropomorphization of this technology is dishonest at best, and harmful and dangerous at worst.
xoralkindi
12 hours ago
> It can generate data that mimics anything humans have produced...
No, it can generate data that mimics anything humans have put on the WWW
nradov
8 hours ago
The frontier model developers have licensed access to a huge volume of training data which isn't available on the public WWW.
ozim
13 hours ago
Anthropomorphization definitely sucks, and the hype is over the top.
But it is far from snake oil, as it actually is useful and does a lot of things well.
deadbabe
14 hours ago
Data from the future is tunneling into the past to mess up our weights and ensure we never achieve AGI.
richk449
12 hours ago
> if these tools are close to having super-human intelligence, and they make humans so much more productive, why aren't we seeing improvements at a much faster rate than we are now? Why aren't inherent problems like hallucination already solved, or at least less of an issue? Surely the smartest researchers and engineers money can buy would be dogfooding, no?
Hallucination does seem to be much less of an issue now. I hardly even hear about it - like it just faded away.
As far as I can tell smart engineers are using AI tools, particularly people doing coding, but even non-coding roles.
The criticism feels about three years out of date.
imiric
12 hours ago
Not at all. The reason it's not talked about as much these days is that the prevailing way to work around it is by using "agents", i.e. by continuously prompting the LLM in a loop until it happens to generate the correct response. This brute-force approach is hardly a solution, especially in fields that don't have a quick way of verifying the output. In programming, trying to compile the code can catch many (but definitely not all) issues. In other science and humanities fields this is just not possible, and verifying the output is much more labor intensive.
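To make that concrete, the loop is roughly the following (a minimal sketch in Python; generate and verify are hypothetical stand-ins for the model call and for whatever external check is available, e.g. compiling the code or running tests):

    # Sketch of the "agent" retry loop described above. `generate` and `verify`
    # are hypothetical stand-ins for an LLM API call and an external check
    # (compiler, test suite, etc.); neither models a real API.
    def generate(prompt, feedback=None):
        """Stand-in for an LLM call; returns a candidate answer."""
        return f"candidate for {prompt!r} given feedback {feedback!r}"

    def verify(candidate):
        """Stand-in for verification, e.g. compiling or running tests."""
        return False, "tests failed"  # pretend the check never passes here

    def agent_loop(prompt, max_attempts=5):
        feedback = None
        for _ in range(max_attempts):
            candidate = generate(prompt, feedback)
            ok, feedback = verify(candidate)
            if ok:
                return candidate  # a candidate finally passed the external check
        return None  # brute force gave up without a verified answer

    print(agent_loop("migrate this VM from VMware to Hyper-V"))

Nothing in that loop makes the model "know" the answer; it just regenerates until the external check passes, which is exactly why fields without a cheap verifier are stuck doing the verification by hand.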
The other reason is that the primary focus of the last 3 years has been scaling the data and hardware up, with a bunch of (much needed) engineering around it. This has produced better results, but it can't sustain the AGI promises for much longer. The industry can only survive on shiny value-added services and smoke and mirrors for so long.
majormajor
9 hours ago
> In other science and humanities fields this is just not possible, and verifying the output is much more labor intensive.
Even just in industry, I think data functions at companies will have a dicey future.
I haven't seen many places where there's scientific peer review - or even software-engineering-level code review - of findings from data science teams. If the data science team says "we should go after this demographic" and it sounds plausible, it usually gets implemented.
So if the ability to validate was already missing even pre-LLM, what hope is there for validating the LLM-powered replacement? And what hope does the person doing the non-LLM version have of keeping their job (at least until several quarters later, when the strategy either proves itself out or doesn't)?
How many other departments are there where the same lack of rigor already exists? Marketing, sales, HR... yeesh.
natebc
11 hours ago
> Hallucination does seem to be much less of an issue now. I hardly even hear about it - like it just faded away.
Last week I had Claude and ChatGPT both tell me different non-existent options for migrating a virtual machine from VMware to Hyper-V.
The week before that, one of them (I don't remember which, honestly) gave me non-existent options for fio.
Both of these are things that the first-party documentation or man page has correct, but I was being lazy and trying to save time or be more efficient, like these things are supposed to do for us. Not so much.
Hallucinations are still a problem.
nunez
9 hours ago
The few times I've used Google to search for something (Kagi is amazing!), its Gemini assistant at the top fabricated something insanely wrong.
A few days ago, I asked free ChatGPT to tell me the head brewer of a small brewery in Corpus Christi. It told me that the brewery didn't exist (it does; we were headed there a few minutes later), and after re-prompting it, it gave me some phone number it found in a business filing. (ChatGPT has been using web search for RAG for some time now.)
Hallucinations are still a massive problem IMO.
taormina
9 hours ago
ChatGPT constantly hallucinates. At least once per conversation I attempt to have with it. We all gave up on bitching about it constantly because otherwise we would never talk about anything else, but I have no reason to believe that any LLM has even vaguely solved this problem.
HexDecOctBin
6 hours ago
I just tried asking ChatGPT how to "force PhotoSync to not upload images to a B2 bucket that are already uploaded previously", and all it could do was hallucinate options that don't exist and web pages that are irrelevant. This is with the latest model and all the reasoning and researching applied, and across multiple messages in multiple chats. So no, hallucination is still a huge problem.
majormajor
9 hours ago
> Hallucination does seem to be much less of an issue now. I hardly even hear about it - like it just faded away.
Nonsense. There is a TON of discussion about how the standard workflow is "have Cursor-or-whatever check the linter and try to run the tests and keep iterating until it gets it right", which is nothing but working around hallucinations. Functions that don't exist. Lines that don't do what the code would've required them to do. Etc. And yet I still hit cases, weekly at least, when trying to use these "agents" to do more complex things, where the model talks itself into a circle and can't figure it out.
What are you trying to get these things to do, and how are you validating that there are no hallucinations? You hardly ever "hear about it" but ... do you see it? How deeply are you checking for it?
(It's also just old news - a new hallucination is less newsworthy now, we are all so used to it.)
Of course, the internet is full of people claiming that they are using the same tools I am but getting several times the output. Yet I wonder... if this is the case, where is the acceleration in quality improvements in any of the open source software I use daily? Or where are the new 10x-AI-agent-produced replacements? (Or the closed-source products, for that matter - but there it's harder to track the actual code.) Or is everyone who's doing less-technical, less-intricate work just getting themselves hyped into a tizzy about getting faster generation of basic boilerplate for languages they hadn't personally mastered before?
amlib
8 hours ago
How can it not be hallucinating anymore when everything the current crop of generative AI algorithms does IS hallucination? What actually happens is that sometimes the hallucinated output is "right", or more precisely, coherent with the user's mental model.
kevinventullo
6 hours ago
You don't hear about it anymore because it's not worth talking about anymore. Everyone implicitly understands these models are liable to make up nonsense.
leptons
12 hours ago
Are you hallucinating?? "AI" is still constantly hallucinating. It still writes pointless code that does nothing towards anything I need it to do, a lot more often than is acceptable.
pu_pe
18 hours ago
I don't think it's as simple as that. Chatbots can be used to harvest data, and sales are still important before and after you achieve AGI.
worldsayshi
15 hours ago
It could also be the case that they think AGI could arrive at any moment, but it's very uncertain when, and only so many people can work on it simultaneously. So they spread out investments to also cover low-uncertainty areas.
energy123
15 hours ago
Besides, there is Sutskever's SSI which is avoiding customers.
timy2shoes
11 hours ago
Of course they are. Why would you want revenue? If you show revenue, people will ask 'HOW MUCH?' and it will never be enough. The company that was the 100xer, the 1000xer is suddenly the 2x dog. But if you have NO revenue, you can say you're pre-revenue! You're a potential pure play... It's not about how much you earn, it's about how much you're worth. And who is worth the most? Companies that lose money!
pests
13 hours ago
OpenAI considers money to be useless post-AGI. They've even made statements that any investments are basically donations once AGI is achieved.
bluGill
14 hours ago
The people who made the money in gold rushes sold shovels; they didn't mine the gold. Sure, some random people found gold and made a lot of money, but many others didn't strike it rich.
As such, even if there is a lot of money to be made from AI, it can still be the right decision to sell tools to others who will figure out how to use it. And of course, if it turns out to be another pointless fad with no real value, you still make money. (I'd predict the answer is in between: we are not going to get some AGI that takes over the world, but there will be niches where it is a big help, and those niches will be worth selling tools into.)
convolvatron
11 hours ago
It's striking how people seem to automatically exclude the middle: it's either the arrival of the singularity or complete fakery. I think you've expressed the most likely outcome by far - that there will be some really interesting tools and use cases, and some things will be changed forever - but it's very unlikely that _everything_ will.
rvz
18 hours ago
Exactly. For example, Microsoft was building data centers all over the world because "AGI" was "around the corner", according to them.
Now they are cancelling those plans. For them, "AGI" was cancelled.
OpenAI claims to be closer and closer to "AGI" even as more top scientists leave or get poached by other labs that are behind.
So why would you leave if the promise of achieving "AGI" was going to produce "$100B of profits", as per OpenAI's and Microsoft's definition in their deal?
Their actions tell more than any of their statements or claims.
cm277
18 hours ago
Yes, this. Microsoft has other businesses that can make a lot of money (regular Azure) and tons of cash flow. The fact that they are pulling back from the market leader (OpenAI), which they mostly owned, should be all the negative signal people need: AGI is not close, and there is no real moat, even for OpenAI.
whynotminot
15 hours ago
Well, there are clauses in their relationship with OpenAI that sever the relationship when AGI is reached. So it's actually not in Microsoft's interest for OpenAI to get there.
PessimalDecimal
15 hours ago
I haven't heard of this. Can you provide a reference? I'd love to see how they even define AGI crisply enough for a contract.
diggan
15 hours ago
> I'd love to see how they even define AGI crisply enough for a contract.
Seems to be about this:
> As per the current terms, when OpenAI creates AGI - defined as a "highly autonomous system that outperforms humans at most economically valuable work" - Microsoft's access to such a technology would be void.
https://www.reuters.com/technology/openai-seeks-unlock-inves...
computerphage
15 hours ago
Wait, aren't they cancelling leases on non-AI data centers that aren't under Microsoft's control, while spending much more money to build new AI-focused data centers that they own? Do you have a source that says they're cancelling their own data centers?
PessimalDecimal
15 hours ago
https://www.datacenterfrontier.com/hyperscale/article/552705... might fit the bill of what you are looking for.
Microsoft itself hasn't said they're doing this because of oversupply in infrastructure for its AI offerings, but they very likely wouldn't say that publicly even if that were the reason.
computerphage
15 hours ago
Thank you!
zaphirplane
18 hours ago
I'm not commenting on the whole, just the rhetorical question of why people would leave.
They are leaving for more money, more seniority, or because they don't like their boss. Zero to do with AGI.
Game_Ender
17 hours ago
I think the implicit take is that if your company hits AGI, your equity package will do something like 10x-100x, even if the company is already big. The only other way to do that is to join a startup early enough to ride its growth wave.
Another way to say it is that people think it's much more likely for a decent LLM startup to grow really strongly for its first several years and then plateau than for their current established employer to hit hypergrowth because of AGI.
leoc
16 hours ago
A catch here is that individual workers may have priorities which are altered by the strong natural preference for assuring financial independence. Even if you were a hot AI researcher who felt (and this is just a hypothetical) that your company was the clear industry leader and had, say, a 75% chance of soon achieving something AGI-adjacent and enabling massive productivity gains, you might still (and quite reasonably) prefer to leave if that was what it took to make absolutely sure of getting your private-income screw-you money (and/or private-investor seed capital). Again, this is just a hypothetical: I have no special insight, and FWIW my gut instinct is that the job-hoppers are in fact mostly quite cynical about the near-term prospects for "AGI".
sdenton4
11 hours ago
Additionally, if you've already got vested stock in Company A from your time working there, jumping ship to Company B (with higher pay and a stock package) is actually a diversification. You can win whichever ship pulls in first.
The "no one jumps ship if AGI is close" assumption is really weak, and seemingly completely unsupported in TFA...
andrew_lettuce
12 hours ago
You're right, but the narrative out of these companies directly refutes this position. They're explicitly saying that 1. AGI changes everything, 2. It's just around the corner, 3. They're completely dedicated to achieving it; nothing is more important.
Then they leave for more money.
sdenton4
11 hours ago
Don't conflate labor's perspective with capital's stated position... The companies aren't leaving the companies; the workers are leaving the companies.
Touche
18 hours ago
Yeah, I agree. This idea that people won't change jobs if they are on the verge of a breakthrough reads like a Silicon Valley fantasy where you can underpay people by selling them on a vision or something. "Make ME rich, but we'll give you a footnote on the Wikipedia page."
LtWorf
13 hours ago
I think you're being very optimistic with the footnote.
rvz
18 hours ago
> They are leaving for more money, more seniority or because they don’t like their boss. 0 about AGI
Of course, but that's part of my whole point.
Such statements and targets about how close we are to "AGI" have become nothing but false promises, with AGI used as the prime excuse to keep raising more money.
tuatoru
12 hours ago
> Their actions tell more than any of their statements or claims.
At Microsoft, "AI" is spelled "H-1B".
redhale
16 hours ago
> Why bother developing chatbots or doing sales, when you will be operating AGI in a few short years?
To fund yourself while building AGI? To hedge risk that AGI takes longer? Not saying you're wrong, just saying that even if they did believe it, this behavior could be justified.
krainboltgreene
13 hours ago
There is no chatbot so feature-rich that it would fund the billions being burned on a monthly basis.
delusional
18 hours ago
Continuing in the same vein: why would they force their super valuable, highly desirable, profit-maximizing chatbots down your throat?
Observation of reality is more consistent with company FOMO than with actual usefulness.
Touche
18 hours ago
Because it's valuable training data. Like how having Google Maps on everyone's phone made their map data better.
Personally, I think AGI is ill-defined and won't arrive as a single new model release. Instead, the thing to look for is how LLMs are being used in AI research, and there are some advances happening there.
richk449
12 hours ago
> If they expected to achieve AGI soon, their behaviour would be completely different. Why bother developing chatbots or doing sales, when you will be operating AGI in a few short years?
What if chatbots and user interactions ARE the path to AGI? Two reasons they could be: (1) Reinforcement learning in AI has proven to be very powerful. Humans get to GI through learning too - they aren’t born with much intelligence. Interactions between AI and humans may be the fastest way to get to AGI. (2) The classic Silicon Valley startup model is to push to customers as soon as possible (MVP). You don’t develop the perfect solution in isolation, and then deploy it once it is polished. You get users to try it and give feedback as soon as you have something they can try.
I don't have any special insight into AI or AGI, but I don't think OpenAI selling useful and profitable products is proof that there won't be AGI.