asielen
12 hours ago
I don't trust most CEOs' perspectives on AI at all; they are far too removed from the actual work to know what AI can and can't do.
When I hear a CEO say this, what I hear is that they are going to use AI as an excuse to do massive layoffs to juice the stock price and then cash out before the house of cards comes tumbling down. Every public company CEO's dream. The GE model in the age of AI.
Will AI drastically reshape industries and careers? Absolutely. Do current CEOs understand or even care how (outside of making them richer in the next few quarters)? No.
CEOs are just marketing to investors with ridiculous claims because their products have stagnated. (See Benioff's recent claim that 50% of the work at Salesforce is done by AI. Maybe that's why it sucks so much.)
jmathai
6 hours ago
Doesn’t really matter why you lost your job though, does it? Especially when the job loss is widespread.
dpoloncsak
3 hours ago
The argument, I think, is that if AI cannot actually replace these jobs, either other companies will pop up to fill the holes, or the incumbents will quickly reverse their position once the negative results start coming in.
Sure, you can have all of Salesforce run entirely by AI, but people may just find a better solution that actually works. Claude ran a vending machine, after all, and it was deemed a failure.
So yeah, maybe there's a rocky month or two, and I'm not trying to downplay the severity of that... but the demand for the roles these services fulfill will still exist, and it will become a market opportunity.
jmathai
2 hours ago
I think it’s more likely that AI will make everyone in a department 25% more efficient. This means a department of 5 people will become a department of 4 people.
AI will have taken one person’s job.
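The arithmetic behind that: four people working 25% faster cover the old five-person workload, since 4 × 1.25 = 5 (equivalently, 5 / 1.25 = 4).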
janalsncm
12 hours ago
Well, that means they will continue to do it until it starts hurting their bottom line. Targets missed, sales down, etc.
happymellon
11 hours ago
And then the ~~outsourcing~~ AI replacement may slow down or reverse.
It's happened before, it'll happen again, and ~~Visual Basic~~ AI may or may not change the landscape. I'm not that impressed with the current guise, but after a few revisions it may be better.
impossiblefork
11 hours ago
I actually thought LLMs worked well (and I do a lot of LLM work) until a couple of days ago, when I started trying to do some weird things and ended up in hallucination land, and it didn't matter what model I used.
Literally every model hallucinated even basic things, like which named parameters a function takes.
It made me think the core benefit of LLMs is that, even if they aren't smart, at least they've read whatever they need to answer your question. But if they haven't, if there isn't much data on the software framework and not many examples, then nothing else matters, and you can't really feed in the whole of vLLM. You actually need the companies running the AI to come up with training exercises for it: train it on the code, train it on answering questions about the code, ask it to write up different simple variations of things in the code, and so on.
So you really need to use LLMs to see their limits, and use them on 'weird' stuff, frameworks nobody expects anyone to mess with. Even being a researcher who fiddles with improving LLMs every day may not be enough to see the limits, because they arrive very suddenly, and then any accuracy or relevance goes away completely.
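For that particular failure mode (hallucinated keyword arguments), one cheap guardrail is to check the model's suggestion against the actual signature before running anything. A minimal Python sketch using only the standard library; generate() and the suggested argument names are made up for illustration, not a claim about vLLM's or any real framework's API:

    import inspect

    def unknown_kwargs(func, suggested):
        # Return the suggested keyword-argument names that func's signature does not accept.
        params = inspect.signature(func).parameters
        if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
            return set()  # func takes **kwargs, so any name is accepted; can't verify this way
        return {name for name in suggested if name not in params}

    # Pretend the model claimed generate() takes num_beams and max_len:
    def generate(prompt, temperature=1.0, max_tokens=256):
        ...

    print(unknown_kwargs(generate, {"num_beams", "max_len", "temperature"}))
    # -> the two hallucinated names (set order may vary); temperature passes

It only catches names that don't exist, not hallucinated semantics, but that's exactly the class of error above.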