some-guy
5 days ago
These claims wouldn't matter if the topic weren't so deadly serious. Tech leaders everywhere are buying into the FOMO, convinced their competitors are getting massive gains they're missing out on. This drives them to rebrand as AI-First companies, justify layoffs with newfound productivity narratives, and lowball developer salaries under the assumption that AI has fundamentally changed the value equation.
This is my biggest problem right now. The types of problems I'm trying to solve at work require careful planning and execution, and AI has not been helpful for it in the slightest. My manager told me that the time to deliver my latest project was cut to 20% of the original estimate because we are "an AI-first company". The mass hysteria among SVPs and PMs is absolutely insane right now, I've never seen anything like it.
culopatin
4 days ago
Today a friend of mine connected me with his uncle, who wanted to develop an MVP for his company. He claimed he didn't want to distract his engineers with this project, but that "it shouldn't take you more than 5hs to do this with vibe coding". I promptly declined; if it takes 5hs, why are you reaching out to me? It would take more than 5hs just to bring me into the loop of what you want, versus your own engineers. If vibe coding is that good, you might as well DIY while you're showering!
noduerme
4 days ago
These kinds of "clients" have always been around. If they want to tell me how long they think it should take, my answer has always been that they should do it themselves. If they say something like, "it shouldn't take too long," I say, "send me what you need it to do." Then I look at that and ask, "what if A, B or C happens? What if a user does X?" I make a list about 15 bullet points long of edge cases they hadn't thought of, showing the flaws in their business logic. (This, btw, is what I'd do if I were instructing an LLM as well.) Then it's, "well, how long will that take?"
And my answer - like the best of car mechanics who work on custom rides - is: I don't know how long until I actually get in there, but minimum 5x what you think, and my rate is $300/hr. Is it worth it to you to do it right?
Usually the answer is no. When it's yes, I have a free hand. And having a few clients who pay well is worth a lot more than having a few dozen who think they know everything and are too cheap to pay for it anyway.
Wowfunhappy
4 days ago
I have fully vibe coded several apps at this point (none professionally, but stuff I actively use in my life). One thing I don't think everyone understands is that it still takes time. I need to do countless rounds of testing, describing the exact problem I find, rinse and repeat again and again for hours and hours. I happen to enjoy this way of working and I do think it's faster than writing the code myself, but it's not fast.
patapong
4 days ago
Yep agreed... It's a bit like sifting gold, most of what it produces is crap, but if you stay with it long enough and put in enough effort you will eventually have something that works well.
Part of the issue is that it is so fast to get to a working prototype that it feels like you are almost there, but there is a lot of effort to go.
yetihehe
4 days ago
> that it feels like you are almost there, but there is a lot of effort to go.
People built tools specifically to make mockups look like mockups and not like a finished product, precisely because clients were ready to ship the moment they saw realistic-looking interface mockups.
Sharlin
4 days ago
Us devs always (justifiably) complain how the PHBs of the world go all "looks great, now ship it" when presented a minimal prototype hacked together in a week, missing 95% of all the time-consuming parts. Apparently it’s easy to fall into the same trap even when you should know better…
some-guy
4 days ago
Not only that, but the requirements are far more lax when it's your own project. In an enterprise setting, on a 14-year-old codebase (where this 80%-reduced-timeframe project lies), vibe coding doesn't work at all! PMs and managers simply do not understand the nuances of these tools.
dangus
4 days ago
I find that the main way it saves time is in situations where yes I know how to write code but I am looking to write something in a domain I’m less familiar with.
And regex.
ta12653421
3 days ago
++1
duxup
4 days ago
If it's 5hs, why not distract his own engineers?
That's pretty much nothing ... to me that line indicates a whole lot of other possible things.
materielle
4 days ago
My opinion is that AI isn’t actually the root of the problem here.
It’s that we are heading towards a big recession.
As in all recessions, people come up with all sorts of reasons why everything is fine until it can’t be denied anymore. This time, AI was a useful narrative to have lying around.
osigurdson
4 days ago
I think a kind of AI complacency has set in. Companies are just in chill mode right now, laying off people here and there while waiting for AI to get good enough to do actual work.
baselessness
4 days ago
Everyone is bracing for a labor supply shock. It will move in the direction opposite what investors expect.
2030 will be 2020 all over again.
denkmoon
4 days ago
Why?
rsynnott
4 days ago
If (a) companies lay too many people off because the magic robots will make engineers unnecessary and (b) the pipeline collapses, because being a software engineer is an undesirable career because it is being replaced by robots and (c) it emerges that the robots are kinda bullshit, then there's going to be one hell of a shortage.
When I started a CS degree in 2003, we were still kinda in the "dot com crash has happened, no-one will ever hire a programmer again" phase, and there were about 50 people in my starting class. I think the same course two years ago had about 200. The 'correct' number, for actual future demand, was certainly closer to 200 than 50, and the industry as a whole had a bit of a labour crunch in the early 10s in particular.
InsideOutSanta
4 days ago
I believe we are vastly underestimating the number of programmers needed, as some companies reap unusually high rewards from hiring programmers. Companies like Google can pay huge sums of money to programmers because they make even higher sums of money from the programmer's work.
This means that they inflate programmer salaries, which makes it impossible for most companies that could benefit from software development to hire developers.
We could probably have five times as many software developers as we have now, and they would not be out of work; they would only decrease average salaries for programmers.
phatskat
3 days ago
But if only Google or similarly sized companies can pay that well, and there are tons of programmers, obviously the average salary will balance out lower than what Google pays, but will still be competitive to the thousands of programmers who didn't get hired at Google.
roncesvalles
a day ago
>but will still be competitive to the thousands of programmers who didn’t get hired at Google
Why would this be the case? Many programmers join Google or Meta (or similar tier companies) and immediately double or triple their income. Software salaries are famously bimodal and people often transition from the lower mode to the higher mode practically overnight.
In fact (and I'm not an economist) I conjecture that the lower mode exists because the upper mode exists. That is, people purposefully don't really care what their salary is (i.e. don't put upward wage pressure) when they're at lower-mode companies because they know one day they'll make the leap to the upper-mode. In other words, the fact that Google-tier companies pay well allows other companies to pay poorly because those guys are just padding their resumes to get a 350k job at Google and don't really care whether Bank of Nowhere pays them $90k or $110k.
camdenreslink
4 days ago
If a company could benefit from software developers but can’t afford them, then they can purchase Saas offerings written by companies that can afford developers. I don’t think we’ve run out of opportunities to improve the business world with software quite yet.
InsideOutSanta
4 days ago
The fact that there is a market for these products, but they are almost universally terrible, supports my point.
osigurdson
4 days ago
I think it might be worse than that, as staff reductions are across the board, not just in software development roles. My hope is that startup creation will be unprecedented, taking advantage of the complacency. They will wonder why AI deleted their customers when they thought it was supposed to delete their employees.
DrillShopper
4 days ago
Holding on for that sweet sweet pay bump after the coming AI winter
ModernMech
4 days ago
Combine a bunch of factors:
1) Fewer students are studying computer science. I'm faculty at a top CS program, and we saw our enrollment decline for the first time ever; other universities are seeing similar slowdowns in enrollment [1]
2) Fewer immigrants are coming to the United States to work and live; the US is perhaps looking at its first population decline ever [2]
3) Current juniors are being stunted by AI, they will not develop the necessary skills to become seniors.
4) Seniors retiring faster because they don't want to have to deal with this AI crap, taking their knowledge with them.
So we're looking at a negative bubble forming in the software engineering expertise pipeline. The money people are hoping that AI can become proficient enough to fill that space before everything bursts. Engineers, per usual, are pointing out the problem before it becomes one, and no one is listening.
[1]: https://www.theatlantic.com/economy/archive/2025/06/computer...
[2]: https://nypost.com/2025/09/03/us-news/us-population-could-sh...
roncesvalles
a day ago
1. OBBB rolled back the R&D deduction changes in Section 174 that (allegedly) triggered the layoffs and froze up hiring in 2022-2023.
2. It looks like rates will keep going down.
3. Fewer people are going into CS due to the AI hysteria. You might say oh there's a 4 year lag, but not quite. We should see an immediate impact from career changers, CS grads choosing between career and professional school, and those switching out of CS careers.
The tech AI fear hysteria is so widespread that I've even heard of people avoiding non-SWE tech careers like PM.
aprilthird2021
4 days ago
First thing I thought of was Benioff saying he cut thousands of customer support roles because AI can do it better, then turning around and giving a lackluster earnings report with revised-down guidance, and the stock tanking.
some-guy
4 days ago
I have never, ever seen SVPs, CEOs, and PMs completely misunderstand a technology before. And I agree with you, I think it's more of an excuse to trim fat--actual productivity is unlikely to go up (it hasn't at our Fortune 500 company)
osigurdson
4 days ago
>> productivity is unlikely to go up
I wonder how that would even be measured? I suppose you could do it for roles that do the same type of work every day, i.e. perhaps there is some statistical relevance to the number of calls taken in a call center per day, or something like that. On the software development side, however, productivity metrics are very hard to quantify. Of course, you can make a dashboard look however you want, but it's essentially impossible to tie those metrics to NPV.
isodev
4 days ago
> we are heading towards a big recession
Who is we? One country heading into a recession is hardly enough to nudge the trend of "all things code"
viridian
4 days ago
The last US recession that didn't also pull in the rest of the western world was in 1982, over 40 years ago. Western Europe, Aus, NZ, Canada, and the US all largely rise and sink on the same tides, with differences measured in degrees.
danaris
4 days ago
Enough of the tech industry is America-based that a US recession is enough to do much more than nudge the trend of "all things code". Much as I would prefer that it were not so.
colechristensen
4 days ago
America's recessions are global recessions.
brianmcc
4 days ago
Sadly yes - "When America sneezes, the World catches cold"
dragonwriter
3 days ago
If that “one country” is the US and not, say, Burkina Faso, it is a major impact on financing, and software has an unusually high share of positions dependent on speculative investment for future return rather than directly related to current operations.
rootusrootus
5 days ago
> My manager told me that the time to deliver my latest project was cut to 20% of the original estimate
That's insane. Who the hell pulls a number out of their ass and declares it the new reality? When it doesn't happen, he'll pin the blame on you, but everyone else above will pin the blame on him. He's the one who will get fired.
Laying off unnecessary developers is the answer if LLMs turn out to make us all so much more productive (assuming we don't just increase the amount of software written instead). But that happens after successful implementation of LLMs into the development process, not in advance.
Starting to think I should do the inadvisable and move my investments far far away from the S&P 500 and into something that will survive the hype crash that can't be too far off now.
torginus
4 days ago
The whole 'startup/scaleup' culture (which has produced the industry titans of today) is insane and has been insane for as long as it has been a thing: the culture of either 'just grow and figure out how to monetize' (social media, food delivery, etc.), or 'we're selling you this technology that doesn't exist yet but is going to be insanely valuable in the future' (AGI, self-driving), or 'we're selling shovels to the first two' (cloud providers, Nvidia).
I'd argue that compared to a decade or 15 years ago, relatively little value has been created. If you sat down in front of a 15 yo computer, or tried to solve a technical challenge with the tooling of 15-10 years ago, I don't think you'd get a significantly worse result.
Yet in this time the US has doubled its GDP, most of it flowing to the top, to the tech professionals and financiers who benefited from this.
And some of this money went into assets with constrained supply, such as housing, driving up prices even adjusted for inflation, making average people that much worse off.
While I do feel society is making progress, it's been a slow and steady march, which in relative terms means slowing. If I gave you $10 every week, by week 2 you'd have doubled your wealth; by the end of the year, you'd barely notice the increase.
Technology accumulation is the same, I'd argue it's even worse, since building on top of existing stuff has a cost proportional to the complexity for a fixed unit of gain (and features get proportionally less valuable as you implement the most important ones first).
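The "$10 a week" point above can be sketched in a few lines (illustrative only; the function name and numbers are my own):

```python
def relative_gains(weekly=10.0, weeks=52):
    """Fraction by which wealth grows each week when it increases
    by a fixed absolute amount (the '$10 a week' example)."""
    wealth = weekly  # after week 1 you hold your first $10
    gains = []
    for _ in range(weeks - 1):
        gains.append(weekly / wealth)  # this week's gain relative to current wealth
        wealth += weekly
    return gains

gains = relative_gains()
# Week 2 doubles your wealth (+100%); by week 52 the same $10 adds under 2%.
```

The absolute gain never changes, but the relative gain collapses, which is the sense in which a "slow and steady march" is slowing.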
Sorry, got distracted from my main point - what happens when people stop believing that these improvements are meaningful, or that technology that was priced in to produce 100x the value will come at all (and, more importantly, that the company you're invested in will be able to capture it)?
dangus
4 days ago
> If you sat down in front of a 15 yo computer, or tried to solve a technical challenge with the tooling of 15-10 years ago, I don't think you'd get a significantly worse result.
While you have decent points in your comment (essentially, the idea of tech industry growth slowing due to low-hanging fruit being picked), if this statement is going to be your barometer, you're going to end up looking stupendously wrong.
You can sit your Grandma down at her computer and have her type in “please make me a website for my sewing club that includes a sign up form and has a pink background” and AI will just do it and it’ll probably even work the first time you run it.
15 years ago tossing a website on Heroku was a pretty new concept, and you definitely had to write it all on your own.
10 years ago Kubernetes had its initial release.
Google Drive and Slack are not even 15 years old.
TensorFlow is just hitting its 10th birthday.
I think you’re vastly underestimating the last 15 years of technological progress and in turn value creation.
ndriscoll
4 days ago
I just tried it and no, you can't. I don't really even see how you could. Where is the form supposed to go? Is Grandma supposed to rent a server somewhere to do processing? Where?
25 years ago we had wysiwyg editors to build web pages, and she could just include a link to email her to have people ask her to "sign up" (the entire point of a sewing club is to socialize. You don't need an automated newsletter or whatever. That's for people that want to sell you something). You'd put it into the space your ISP gave you, or Windows actually included a personal web server. You had to be somewhat technical to use the Windows server, but it could've been made more friendly instead of being removed, and personal computers should make it easy to share a static website.
We've clearly regressed since then. People think you need an engineer or AI to build basic web pages. My actual grandma was doing some of this stuff when I was a kid in the 90s, as was I.
What happened to the web seems analogous to me to as if we abandoned word processors and spreadsheets and said you need a six-figure engineer to do custom programming any time you need the features of one of those things.
ansgri
4 days ago
100% this. At that time, if one needed a simple custom database client with a GUI and analytical reports, the go-to solution was MS Access, and it worked well. It could even be placed on a shared drive. Now, as the accepted solution, you either pay for different SaaSes, each with a fraction of the functionality, or have to code a bespoke website using overcomplicated frameworks.
dangus
3 days ago
MS Access still exists and is still maintained and offered for sale.
You should not look back on those solutions and neglect to acknowledge their limitations. Access can only handle small databases (maximum 2GB) and about 100 simultaneous connections.
Access is basically a product of the technological limitations of its time. It isn’t designed the way it is because that way is the best design.
The kind of business that relied on Access is going to find that solutions like Airtable are far easier and more powerful, and certainly less prone to data loss than mom and pop’s shared drive. There are even open source self-hosted alternatives like NocoDB and Baserow (or cloud hosted).
You’ll inevitably complain about the subscription cost of SaaS solutions but it’s not like MS Office business licenses were ever cheap.
dangus
3 days ago
The thing is that you and grandma were only accomplishing static sites back then. You can still do that today with many more options than you had back then and you can even use many of the exact same methods.
But when it comes to actually accomplishing something with application logic on the web, Grandma could ask the AI agent literally all these questions and get solid answers. At any point you get confused you can just ask what to do. You can highlight a line of code and ask “what does this mean?” You can even say “I want this web page to be put online what do I do?”
Beats waiting for a human to answer you.
You’re also taking my example application way too literally. No I don’t know why Grandma needs a signup form, I just couldn’t think of a web app off the top of my head.
MS Access and WYSIWYG tools like FrontPage and iWeb were not good. I know because I was there and I used FrontPage at work. A link to your email (get spammed forever) is not a replacement for an application or email form. The whole reason code is preferred over WYSIWYG is that you inevitably have issues with change management even for simple personal projects, which is why static site generators have gained popularity. I'm sure your grandma could have handled markdown + someone else's Hugo template + Netlify.
Hell, if we want to talk about progress in WYSIWYG editors, Trix greatly improved and modernized the shortfalls of that process, and it launched in 2014. So even in the world of WYSIWYG we have better tools now than before.
IIS has not been removed from Windows home edition, by the way.
dep_b
4 days ago
25 years ago I uploaded zip files with my PHP update to overwrite my older site, and possibly I had to do a MySQL migration before I could go live again. And when I don't have to care about anybody else's professional standards, that's still how I make simple websites.
Apart from doing the styles and layout, I don't think current tools have less friction. They're a lot safer though. Can't say I never dropped a production database.
ViewTrick1002
4 days ago
Kind of missing the existence of Wordpress? A random theme + a "form builder" plugin and it would be done with hosting taken care off wherever you created the site.
dangus
3 days ago
I really just didn’t bring it up because it is older than 15 years old and just spans the evolution of the web.
I definitely agree that it’s a reasonable choice.
insane_dreamer
4 days ago
> vastly underestimating the last 15 years of technological progress
based on your examples, I'd say you're vastly overestimating
sure, those are all new techs, but it's not like you couldn't do those things before in different ways or they are revolutionary in any way
ModernMech
4 days ago
Their point was that we don't need things like Heroku, Kubernetes, Slack, TensorFlow, etc. They're not creating value, they're propping up a tech stack, whose value is questionable, given the amount we've invested in it. It seems that over the past 15 years of tech, the end result is that a few companies and people became fabulously wealthy, and the rest of us are pretty much worse off. Tech isn't transforming our lives or society the way tech companies promised it would 15 years ago.
As for grandma, 15 years ago she could have just posted her sewing club on Facebook; she doesn't need Heroku or AI.
dangus
3 days ago
I think you’re willfully ignorant if you’re going to claim that Slack and Kubernetes haven’t created any value. And of course these are just random examples that I happened to think of.
I would say that these technologies/products being so wildly popular puts the burden of proof on you to show me some kind of evidence that these technologies aren't productive. Like, are you trying to say that something had better deployment velocity and reliability than Kubernetes at handling high-complexity application infrastructure deployments for large enterprise companies? What was it? Why was it better than Kubernetes?
The analogy is that you’re basically saying that zippers aren’t really better than buttons but then literally everyone is overwhelmingly wearing pants and coats with zippers and very strongly prefer zippers. So really it’s on you to prove to me that I should be using pants with buttons instead.
Finally, there’s a lot of irony in your first paragraph complaining about a few tech oligarchs becoming fabulously wealthy and then suggesting that Grandma just use Facebook instead of building her own site. In any event, my web app example was just a poorly thought out example of a web app, I really just mean a website that has a little more utility than a static site.
ModernMech
3 days ago
> I think you’re willfully ignorant if you’re going to claim that Slack and Kubernetes haven’t created any value.
The juice hasn't been worth the squeeze. You can look at all societal indicators except the stock market pointing downward to get to that conclusion. Nothing is actually better than it was in 2010 despite Uber, Airbnb, Kubernetes, Slack, and all the other SV tech "innovations". People are not happier or wealthier because of the tech coming from Silicon Valley. In general the end result of the last 15 years of tech is that it's made us more neurotic, disconnected, depressed, and angry.
We don't need "better deployment velocity and reliability for high-complexity application infrastructure deployments for large enterprises". Listen to yourself man, you sound like you've been hypnotized by the pointy-haired boss. The tech sector makes false promises about a utopia future, and then it just delivers wealth for shareholders, leaving everyone else worse off.
Grandma especially doesn't need deployment velocity, she's being evicted by her landlord because he wants to turn her flat into an Airbnb. She can't get to the grocery store because the town won't invest in public transport and Uber is the only option. She's been radicalized by Meta and Youtube and now she hates her own trans grandchild because her social media feed keeps her algorithmically outraged. Oh, and now she's getting scammed by AI asking her to invest her life savings in shitcoins and NFTs.
> The analogy is that you’re basically saying that zippers aren’t really better than buttons but then literally everyone is overwhelmingly wearing pants and coats with zippers and very strongly prefer zippers.
I don't agree that the ubiquity and utility are necessarily correlated, so I don't see the zippers and Kubernetes as analogous.
But the proliferation of zippers has more to do with the fact they are easier for manufacturers to integrate into products compared to buttons -- they come pre-measured and installing them is a straight stitch that can be done with a machine, whereas installing buttons is more time-consuming.
Zippers are worse for consumers in many ways, repairability chief among them. But really they are part of a general trend over my lifetime of steadily falling garment quality, as manufacturers race to the bottom.
> In any event, my web app example was just a poorly thought out example of a web app, I really just mean a website that has a little more utility than a static site.
You said it, not me. We had the technology to throw up a static site in 2010 and my grandmother could actually do that with dreamweaver and FTP, and it worked fine.
dangus
20 hours ago
1. You're going off-topic, we weren't talking about societal happiness, we were talking about whether productivity and the ease of developing for the Internet has increased. E.g. if you are trying to claim that Kubernetes doesn't work better for companies than previous solutions wrestling with Chef or Puppet with dependency hell in some VMs I am just going to assume you've never touched a backend and think Terraform is something you do in Sim City 2000.
2. Like it or not, the customers of the tech industry include a lot of large enterprises that do find value in improving velocity and reliability for complex workflows. I am not some kind of corporate sellout for pointing out this plain factual reality.
3. I totally agree with your points about income inequality and happiness metrics among our population but they are not relevant to the topic at hand.
4. If Dreamweaver and FTP is your barometer, recall that Dreamweaver was an expensive paid product. There still are plenty of FTP-based web hosts and you can totally throw up a website with a tool like https://trix-editor.org/, which as I mentioned is a tool that did not exist 15 years ago. You can also just pay for a website service like SquareSpace or Wix (or not, they have free tiers).
The fact that specific individual tools do not exist/are not supported anymore is irrelevant, as there are dozens of often-better tools for throwing a website up that are definitely friendly to novices. Let's not forget the plethora of no-code application development tools that are mostly a recent development.
Mobile users prefer apps 60% over websites. So the real modern barometer is: could your Grandma put up an iPhone app in 2008 following the developer tutorials using Objective-C or would she have a much better time using Swift and/or a no-code app development solution?
apercu
2 days ago
I tried that a few months ago. I just wanted a static site to replace an old WP site. It was a total disappointment. I'm not invested enough to do a write-up, but I tried 3 LLM models and multiple services, and all I got was a bunch of dependencies and technologies that I explicitly said I didn't want. I would have been better off using one of those build-your-website services for non-technical people.
queenkjuul
4 days ago
I actually had a lot of fun building a native .NET web frontend in VB 2005 recently lol. I thought it kind of amazing that i could just bind UI controls directly to state objects and the UI would automatically React to any changes i made. Felt very natural as a modern web dev lol. Found a lightweight .NET JSON library that was compatible all the way back to VB 2005 as well.
In case you also need to control Spotify from Windows 95 :D
cortesoft
4 days ago
> That's insane. Who the hell pulls a number out of their ass and declares it the new reality?
Product and Sales?
Ekaros
4 days ago
Why don't we cut sales commissions by 30% and expect double the sales now. Surely LLMs will make them that much more effective and they still make more.
iamacyborg
4 days ago
I see you’ve met the senior leadership at my current employer
toomuchtodo
5 days ago
VXUS
Not investing advice; the bottom 490 companies in the S&P500 are nominally flat since 2022 and down against inflation, GPUs and AI hype are holding everything together at the moment.
> In simpler terms, 35% of the US stock market is held up by five or six companies buying GPUs. If NVIDIA's growth story stumbles, it will reverberate through the rest of the Magnificent 7, making them rely on their own AI trade stories.
https://www.wheresyoured.at/the-haters-gui/
> Capex spending for AI contributed more to growth in the U.S. economy in the past two quarters than all of consumer spending, says Neil Dutta, head of economic research at Renaissance Macro Research, citing data from the Bureau of Economic Analysis.
https://www.bloodinthemachine.com/p/the-ai-bubble-is-so-big-...
> Two Nvidia customers made up 39% of Nvidia’s revenue in its July quarter, the company revealed in a financial filing on Wednesday, raising concerns about the concentration of the chipmaker’s clientele.
https://www.cnbc.com/2025/08/28/nvidias-top-two-mystery-cust...
cmckn
4 days ago
> He's the one who will get fired.
I wouldn’t count on it.
theandrewbailey
4 days ago
He will fire you before he gets fired.
rootusrootus
4 days ago
In some cases, I'm sure it would play that way. But I've been on both sides, and most places I've worked have been more reluctant to fire engineers than managers.
InsideOutSanta
4 days ago
Here's a new keyboard. I've cut all your estimations by five percent; surely you can type much faster with this.
y1n0
5 days ago
> That's insane. Who the hell pulls a number out of their ass and declares it the new reality?
Chatgpt.
klodolph
4 days ago
I think ChatGPT isn’t the “who”, it’s just the ass that people are pulling numbers out of. A big ole extra butt you graft onto your body.
kunley
4 days ago
Them managers have always been pulling a number out of their ass.
Seattle3503
5 days ago
> My manager told me that the time to deliver my latest project was cut to 20% of the original estimate because we are "an AI-first company".
If we can delegate incident response to automated LLMs too, sure, why not. Let the CEO have his way and pay the reputational price. When it doesn't work, we can revert our git repos to the day LLMs didn't write all the code.
I'm only being 90% facetious.
bitwize
4 days ago
The CEO doesn't care. He'll fail upwards: spin it as increasing shareholder value by cutting costs, and bounce before the chickens come home to roost.
vorpalhex
5 days ago
I agree with you and I'm being 0% facetious.
I think making stakeholders have to engage with these models is the most critical point for people having deadlines or expectations based on them.
Let Claude run incident response for a few weeks. I'll gladly pause pagerduty for myself.
rglover
5 days ago
> My manager told me that the time to deliver my latest project was cut to 20% of the original estimate because we are "an AI-first company".
Lord, forgive them, they know not what they do.
leoc
5 days ago
I think Chuck Prince's "As long as the music is playing, you've got to get up and dance. We're still dancing." from the GFC https://www.reuters.com/article/markets/funds/ex-citi-ceo-de... is the more relevant famous line here.
coffeemug
5 days ago
I haven’t heard this before, this is incredible. Thanks for sharing. There were a bunch of phenomena that didn’t quite make sense to me before, which make perfect sense now that I read the quote.
crabmusket
4 days ago
That quote was the inspiration for one of my favourite bits in the Lehman Trilogy, "the twist". There's a glimpse of it in the trailer here https://youtu.be/Lo4VC43h7ts?si=ebl9WwK2NIgW0sHD&t=49
"Bobby Lehman is ninety three years old and he dances the twist. He is 100 years old! 120! Maybe 140! He dances like a madman!"
bsder
5 days ago
Do not forgive them. We already have a description for them:
"A bunch of mindless jerks who'll be the first against the wall when the revolution comes."
o11c
5 days ago
Remember, the origin of that quote explicitly specifies "marketing department".
The thing about hype cycles (including AI) is that the marketing department manages to convince the purchasers to do their job for them.
atleastoptimal
5 days ago
I think this hits at the heart of why you and so many people on HN hate AI.
You see yourselves as the disenfranchised proletarians of tech, crusading righteously against AI companies and myopic, trend-chasing managers, resentful of their apparent success at replacing your hard-earned skill with an API call.
It’s an emotional argument, born of tribalism. I’d find it easier to believe many claims on this site that AI is all a big scam and such if it weren’t so obvious that this underlies your very motivated reasoning. It is a big mirage of angst that causes people on here to clamor with perfunctory praise around every blog post claiming that AI companies are unprofitable, AI is useless, etc.
Think about why you believe the things you believe. Are you motivated by reason, or resentment?
nemomarx
5 days ago
Find a way to make sure workers get the value of ai labor instead of bosses and the workers will like it better. If the result is "you do the same work but managers want everything in 20% of the time" why would anyone be happy?
atleastoptimal
5 days ago
I agree that if there are productivity gains that everyone should benefit, but the only thing that would allow this to happen are systems and incentive structures that allow that to happen. A manager's job is to increase revenue and cut costs; that's how they get their job, how they keep their job, and how they are promoted. People very rarely get free benefits beyond what the incentive structures they exist in allow.
UncleMeat
5 days ago
> I agree that if there are productivity gains that everyone should benefit
And if they don't, then you'd understand the anger surely. You can't say "well obviously everybody should benefit" and then also scold the people who are mad that everybody isn't benefiting.
atleastoptimal
4 days ago
i’m not scolding anyone who is mad that not everyone is benefitting
UncleMeat
4 days ago
What are you doing then?
queenkjuul
4 days ago
I mean, you certainly fooled us all on that front
ozgrakkurt
5 days ago
And people don’t like this. Something being logical doesn’t mean people have to accept it.
Also, AI has been basically useless every time I tried it, except for converting some struct definitions across languages or similar tasks. It seems very unlikely that it would boost productivity by more than 10%, let alone 400%.
atleastoptimal
5 days ago
What AI coding tools/models have you been using?
spc476
5 days ago
Let me guess ... they're holding it wrong, and the model they're using is older than 20 minutes.
atleastoptimal
5 days ago
You’re assuming how i would respond before i even respond. Please allow inquiries to happen naturally without polluting the thread with meritless cynicism.
jcranmer
4 days ago
With all due respect, with a response like "What AI coding tools/models have you been using?" to a complaint that AI tools just don't seem to be effective, what difference does a reply to that even make? If your experience makes you believe that certain tools are particularly good--or particularly bad--for the tasks at hand, you can just volunteer those specifics.
FWIW, my own experiences with AI have ranged from mediocre to downright abysmal. And, no, I don't know which models the tools were using. I'm rather annoyed that it seems to be impossible to express a negative opinion about the value of AI without having to have a thoroughly documented experiment that inevitably invites the response that obviously some parameter was chosen incorrectly, while the people claiming how good it is get to be all offended when someone asks them to maybe show their work a little bit.
atleastoptimal
4 days ago
Some people complain about AI but are using the free version of ChatGPT. Others are using the best models without a middleman system but still see faults, and I think it’s valuable to inquire about which domains they see no value in AI from. There are too many people saying “I tried AI and it didn’t work at all” without clarifying what models, what tools, what they asked it to do, etc. Without that context it’s hard to gauge the value of any value judgement on AI.
It’s like saying “I drove a car and it was horrible, cars suck” without clarifying what car, the age, the make, how much experience that person had driving, etc. Of course its more difficult to provide specifics than just say it was good or bad, but there is little value in claims that AI is altogether bad when you don’t offer any details about what it is specifically bad at and how.
jcranmer
4 days ago
> It’s like saying “I drove a car and it was horrible, cars suck” without clarifying what car, the age, the make, how much experience that person had driving, etc.
That's an interesting comparison. That kind of statement can be reasonably inferred to be made by someone just learning to drive who doesn't like the experience of driving. And if I were a motorhead trying to convert that person to like driving, my first questions wouldn't be those questions, trying to interrogate them on their exact scenario to invalidate their results, but instead to question what aspect of driving they don't like to see if I could work out a fix for them that would meaningfully change their experience (and not being a motorhead, the only thing I can think of is maybe automatic versus manual transmission).
> there is little value in claims that AI is altogether bad when you don’t offer any details about what it is specifically bad at and how.
Also, do remember that this holds true when you s/bad/good/g.
fragmede
4 days ago
We're still in the early days of LLMs. ChatGPT was only three years ago. The difference it makes is that without details, we don't know if someone's opinion is still relevant, because of how fast things have moved since ChatGPT's original GPT-3.5-based release. If someone half-assed an attempt to use the tools a year ago, and hasn't touched them since, and is going around still commenting about the number of R's in strawberry, then we can just ignore them and move on, because they're just being loudmouths who need everyone else to know they don't like AI. If someone makes an honest attempt, and there's some shortcoming, then that can be noted, and the next version coming out of the AI companies can be improved.
But if all we have to go on is "I used it and it sucked" or "I used it and it was great", like, okay, good for you?
KronisLV
4 days ago
> With all due respect, with a response like "What AI coding tools/models have you been using?" to a complaint that AI tools just don't seem to be effective, what difference does a reply to that even make?
"Damn, these relational databases really suck, I don't know why anyone would use them, some of the data entered by my users had emojis in it and that totally broke it! Furthermore, I have some bits of data that have about 100-200 columns and the database doesn't work well at all, that's horrible!"
In some cases knowing more details could help, for example in the database example a person historically using MySQL 5.5 could have had a pretty bad experience, in which case telling them to use something more recent or PostgreSQL would have been pretty good.
In other cases, they're literally just holding it wrong, for example trying to use an RDBMS for something where a column store would be a bit better.
Replace the DB example with AI, same principles are at play. It is equally annoying to hear people blaming all of the tools when some are clearly better/worse than others, as well as making broad statements that cannot really be proven or disproven with the given information, as it is people always asking for more details. I honestly believe that all of these AI discussions should be had with as much data present as possible - both the bad and good experiences.
> If your experience makes you believe that certain tools are particularly good--or particularly bad--for the tasks at hand, you can just volunteer those specifics.
My personal experience:
* most self-hosted models kind of suck, use cloud ones unless you can get really beefy hardware (e.g. waste a lot of money on them)
* most free models also aren't very good, nor have that much context space
* some paid models also suck, the likes of Mistral (like what they're doing, just not very good at it), or most mini/flash models
* around Gemini 2.5 Pro and Claude Sonnet 4 they start getting somewhat decent, GPT 5 feels a bit slow and like it "thinks" too much
* regardless of what you do, you still have to babysit them a lot of the time; they might take some of the cognitive load off, but they won't usually make you 10x faster. The gains are more likely to come from reduced development friction (esp. when starting new work items)
* regardless of what you do, they will still screw up quite a bit, much like a lot of human devs do out there - having a loop of tests will be pretty much mandatory, e.g. scripts that run the test suite and also the compilation
* agentic tools like RooCode feel like they make them less useless, as do good descriptions of what you want to do - references to existing files and patterns etc., normally throwing some developer documentation and ADRs at them should be enough but most places straight up don't have any of that, so feeding in a bunch of code is a must
* expect usage of around 100-200 USD per month for API calls if the rate limits of regular subscriptions are too limiting
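The "loop of tests" point above can be sketched as a tiny verification script an agent (or a human reviewing agent output) runs after every batch of AI-generated changes. This is a minimal sketch under assumptions: the step names, commands, and the `verify`/`run_step` helpers are placeholders, not part of any specific tool; substitute your project's real build and test invocations.

```python
import subprocess

def run_step(name: str, cmd: list[str]) -> bool:
    """Run one check and report pass/fail instead of raising."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    ok = result.returncode == 0
    print(f"{name}: {'ok' if ok else 'FAILED'}")
    return ok

def verify(steps: dict[str, list[str]]) -> bool:
    """Run checks in order, stopping at the first failure, like a tiny CI pipeline."""
    return all(run_step(name, cmd) for name, cmd in steps.items())

if __name__ == "__main__":
    # Placeholder commands; swap in e.g. ["make"] or your project's test runner.
    verify({
        "compile": ["python", "-m", "compileall", "-q", "."],
        "tests": ["python", "-m", "pytest", "-q"],
    })
```

Feeding the failing step's output back into the model on each iteration is what makes the loop useful; without it, the agent has no signal that it screwed up.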
Are they worth it? Depends. The more boilerplate and boring bullshit code you have to write, the better they'll do. Go off the beaten path (e.g. not your typical CRUD webapp) and they'll make a mess more often. That said, I still find them useful for the reduced boilerplate, reduced cognitive load, as well as them being able to ingest and process information more quickly than I can - since they have more working memory and the ability to spot patterns when working on a change that impacts 20-30 files. That said, the SOTA models are... kinda okay in general.
hackable_sand
4 days ago
People are also tired of rolling their eyes
pabs3
5 days ago
Start a worker-owned tech co-op? Not much point though, since people are going to pay AI to write their code instead, and so the market for consultants will dry up. Probably lots of market space for fixing up broken AI code though :)
entropicdrifter
4 days ago
Don't you think consultants get hired to fix up code? If so, why would their market dry up? If anything, I would expect it to explode
pabs3
4 days ago
That was what I said in my last sentence. The market that will dry up is writing code in the first place.
foxylad
5 days ago
I own my company so have no fear of losing my job - indeed I'd love to offload all the development I do, so I have no resentment against AI.
But I also really care about the quality of our code, and so far my experiments with AI have been disappointing. The empirical results described in this article ring true to me.
AI definitely has some utility, just as the last "game changer" - blockchain - does. But both technologies have been massively oversold, and there will be many, many tears before bedtime.
munificent
5 days ago
> Are you motivated by reason, or resentment?
I think most people are motivated by values. Reason and emotion are merely tools one can use in service of those.
My experience is that people who hew too strongly to the former tend to be more oblivious to what's going on in their own psychology than most.
scrubs
4 days ago
" ..hate ai..."
Bad framing and worse argument. It's emotional.
Every engineer here is evaluating what AI can do as pronounced by CEOs and managers (not experts in software dev) vs. reality. Follow the money.
Terr_
4 days ago
> Follow the money.
Yeah, it's frustrating to see someone opine "critics are motivated by resentment rather than facts" as if it were street-smart savvy psychoanalysis... while completely ignoring how many influential voices boosting the concept have bajillions of dollars in motive to speak as credulously and optimistically as possible.
DrillShopper
4 days ago
> It's an emotional argument, born of tribalism. I’d find it easier to believe many claims on this site that AI is all a big scam and such if it weren’t so obvious that this underlies your very motivated reasoning.
Damn, when did it become wrong for me to advocate in my best interests while my boss is trying to do the same by shoving broken and useless AI tools up my ass?
wolvesechoes
4 days ago
It is not tribalism - I am self-aware enough to recognize my self-interest, and it is in conflict with the interests of the Sam Altmans of this world and modern slave-masters, sorry, managers.
But I am not claiming that AI is useless. It is useful, but I would rather destroy every data center than enjoy the strengthening of techno-feudalism.
ellen364
4 days ago
Love a bit of source analysis.
I'd widen the frame a bit. People scared of losing their jobs might underestimate the usefulness of AI. Makes sense to me, it's the comforting belief. Worth keeping in mind while reading articles sceptical of AI.
But there's another side to this conversation: the people whose writing is pro AI. What's motivating them? What's worth keeping in mind while reading that writing?
SrslyJosh
5 days ago
I know it's probably childish and irrational and a symptom of my inferior intellect, but I have to ask, where's the proof that any of this shit works as well as AI stans claim it does?
Please, enlighten me with your gigantic hyper-rational brain.
atleastoptimal
4 days ago
If you believe AI is overvalued and is a bubble waiting to burst then you are free to short NVDA.
AI stans don’t become AI stans for no reason. They see the many enormous technological leaps and also see where progress is going. The many PhDs currently making millions at labs also have the right idea.
Just look at ChatGPT’s growth alone. No product in history compares, and it’s not an accident.
JohnMakin
4 days ago
This makes the logical fallacy that reasoning born out of resentment is always wrong, of course. It is possible for someone to be as you describe and also correct - I imagine this armchair psychoanalysis is way off though.
gdbsjjdn
5 days ago
Did you read TFA, which shows that developers are slower with AI and think they're faster?
The two types of responses to AI I see are your very defensive type, and people saying "I don't get it".
Mikhail_Edoshin
5 days ago
Bruce Tognazzini wrote that people always claim the keyboard is faster than the mouse, but when researchers actually measured it, the mouse turned out to be faster. Bruce's explanation was that mousing is a low-cognition activity compared to keying, so subjective perceptions are skewed.
jact
4 days ago
This is highly misleading: https://danluu.com/keyboard-v-mouse/
Mikhail_Edoshin
4 days ago
Tognazzini wrote a magazine column with all the downsides: overly funny, non-academic, etc. I think Tog meant something like selecting commands from a menu vs using a command line across a range of applications. Anyway, studies like that must be somewhere in Proceedings of CHI, I guess. (Just checked bibliography in "Tog on interface", but nothing seemed to match. Found a comparison of different types of menus, but that's different. But also relevant: I guess most people would say using pop-up menu right at the mouse cursor will be faster than a fixed one at the top of the screen, yet the experiment shows the opposite.)
Mousing implies things are visible and you merely point to them. Keyboard implies things are non-visible and you recall commands from memory. These two must have a principal difference. Many animals use tools: inanimate objects lying around that can be employed for some gain. Yet no animal makes a tool. Making a tool is different from using it because to make a tool one must foresee the need for it. And this implies a mental model of the world and the future, i.e. a very big change compared to simply using a suitable object on the spot. (The simplest "making" could be just carrying an object when there is no immediate need for it, e.g. a sufficiently long distance. Looks very simple and I myself do not know if any animals exhibit such behavior, it seems to be on the fence. It would be telling if they don't.)
I think the difference between mousing and keying is about as big as of using a tool and making a tool. Of course, if we use the same app all day long, then its keys become motor movements, but this skill remains confined to the app.
atleastoptimal
5 days ago
The article is one person recording their own use of AI, finding no statistical significance but claiming that, since the evaluated ratio of AI:human speed in performing various coding tasks resembled the METR study, AI has no value. People have already talked about issues with the METR study, but importantly, both that study and this blog post query a small number of people using AI tools for the first time, working in a code base they already have experience with and a deep understanding of.
Their claim following that is that because there hasn't been an exponential growth in App store releases, domain name registrations or Steam games, that, beyond just AI producing shoddy code, AI has led to no increase in the amount of software at all, or none that could be called remarkable or even notable in proportion to the claims made by those at AI companies.
I think this ignores the obvious signs of growth in AI companies which provide software engineering and adjacent services via AI. These companies' revenues aren't emerging from nothing. People aren't paying them billions unless there is value in the product.
These trends include
1. The rapid growth of revenue of AI model companies: OpenAI, Anthropic, etc.
2. The massive growth in revenue of companies that use AI, including Cursor, Replit, Lovable, etc.
3. The massive valuations of these companies
Anecdotally, with AI I can make shovelware apps very easily, spin them up effortlessly and fix issues I don't have the expertise or time to do myself. I don't know why the author of TFA claims that he can't make a bunch of one-off apps with capabilities available today when it's clear that many, many people can, have done so, have documented doing so, have made money selling those apps, etc.
ThrowawayR2
5 days ago
> "These companies' revenues aren't emerging from nothing. People aren't paying them billions unless there is value in the product."
Oh, of course not. Just like people weren't paying vast sums of money for beanie babies and dotcoms in the late 1990s and mortgage CDOs in the late 2000s [EDIT] unless there was value in the product.
atleastoptimal
5 days ago
Those are fundamentally different. If people on this site really can't tell the difference then it makes sense why people on HN assume AI is a bubble.
People paid a lot for beanie babies and various speculative securities on the assumption that they could be sold for more in the future. They were assets people aimed to resell at a profit. They had no value by themselves.
The source of revenue for AI companies has inherent value but is not a resell-able asset. You can't resell API calls you buy from an AI company at some indefinite later date. There is no "market" for reselling anything you purchase from a company that offers use of a web app and API calls.
smackeyacky
5 days ago
The central issue here is whether the money pouring into AI companies is producing anything other than more AI companies.
I think the article's premise is basically correct - if we had a 10x explosion of productivity where is the evidence? I would think some is potentially hidden in corporate / internal apps but despite everyone at my current employer using these tools we don't seem to be going any faster.
I will admit that my initial thoughts on Copilot were that "yes this is faster" but that was back when I was only using it for rote / boilerplate work. I've not had a lot of success trying to get it to do higher level work and that's also the experience of my co-workers.
I can certainly see why a particular subset of programmers find the tools particularly compelling, if their job was producing boilerplate then AI is perfect.
atleastoptimal
5 days ago
Yeah AI code is ideal for boilerplate, converting between languages, basically anything where the success criteria are definite. I don’t think there is a 10x productivity upgrade across the board, but in limited domains, yes, AI can produce human level work 10x faster.
The fundamental difference of opinion people have here though is some people see current AI capabilities as a floor, while others see it as a ceiling. I’d agree with arguments that AI companies are overvalued if current models are as capable as AI will ever be for the rest of time, but clearly that is not the case, and very likely, as they have been every few months over the past few years, they will keep getting better.
card_zero
4 days ago
Which way is the rate of change going?
queenkjuul
4 days ago
Dotcoms and CDOs absolutely had perceived intrinsic value
guappa
4 days ago
> The article is one person recording their own use of AI
It's not ONE person. I agree that it's not "every single human being" either; it's more of a preliminary result. But I don't understand why you discount results you dislike. I thought you were completely rational?
https://www.theregister.com/2025/07/11/ai_code_tools_slow_do...
The_Fox
5 days ago
> The rapid growth of revenue of AI model companies, OpenAI, Anthropic, etc.
You can't use growth of AI companies as evidence to refute the article. The premise is that it's a bubble. The growth IS the bubble, according to the claim.
> I don't know why the author of TFA claims that he can't make a bunch of one-off apps
I agree... One-off apps seem like a place where AI can do OK. Not that I care about it. I want AI that can build and maintain my enterprise B2B app just as well as I can in a fraction of the time, and that's not what has been delivered.
atleastoptimal
5 days ago
Bubbles are born out of valuations, not revenue. Web3 was a bubble because the money it made wasn't real productivity, but hype cycles, pyramid schemes, etc. AI companies are merely selling API calls; there is no financial scheming. It is very simply that the product is worth what it is being sold for.
> I want AI that can build and maintain my enterprise B2B app just as well as I can in a fraction of the time, and that's not what has been delivered.
AI isn't at that level yet but it is making fast strides in subsets of it. I can't imagine systems of models and the models themselves won't reach there in a couple years given how bad AI coding tools were just a couple years ago.
guappa
4 days ago
Does the revenue cover the costs?
queenkjuul
4 days ago
I'm motivated by Claude Code producing useless garbage every time i ask it to do anything, and Google giving me AI summaries about things that don't exist
jcgrillo
4 days ago
> their apparent success
Yeah so the thing is the "success" is only "apparent". Having actually tried to use this garbage to do work, as someone who has been deeply interested in ML for decades, I've found the tools to be approximately useless. The "apparent success" is not due to any utility, it's due entirely to marketing.
I don't fear I'm missing out on anything. I've tried it, it didn't work. So why are my bosses a half dozen rungs up on the corporate ladder losing their entire minds over it? It's insanity. Delusional.
dreadnip
5 days ago
I don’t agree. HN is full of technical people, and technical people see LLMs for what they truly are: pattern matching text machines. We just don’t buy into the AGI hype because we’ve seen nothing to support it.
I’m not concerned for my job, in fact I’d be very happy if real AGI would be achieved. It would probably be the crowning tech achievement of the human race so far. Not only would I not have to work anymore, the majority of the world wouldn’t have to. We’d suddenly be living in a completely different world.
But I don’t believe that’s where we’re headed. I don’t believe LLMs in their current state can get us there. This is exactly like the web3 hype when the blockchain was the new hip tech on the block. We invent something moderately useful, with niche applications and grifters find a way to sell it to non technical people for major profit. It’s a bubble and anyone who spends enough time in the space knows that.
DonHopkins
4 days ago
>This is exactly like the web3 hype when the blockchain was the new hip tech on the block. We invent something moderately useful, with niche applications and grifters find a way to sell it to non technical people for major profit.
LLMs are not anything like Web3, not "exactly like". Web3 is in no way whatsoever "something moderately useful", and if you ever thought it was, you were fooled by the same grifters when they were yapping about Web3, who have now switched to yapping about LLMs.
The fact that those exact same grifters who fooled you about Web3 have moved onto AI has nothing to do with how useful what they're yapping about actually is. Do you actually think those same people wouldn't be yapping about AI if there was something to it? Yappers gonna yap.
But Web3 is 100% useless bullshit, and AI isn't: they're not "exactly alike".
Please don't make false equivalences between them like claiming they're "exactly like" each other, or parrot the grifters by calling Web3 "moderately useful".
atleastoptimal
5 days ago
Calling LLMs "pattern matching text machines" is a catchy thought-terminating cliche, which amounts to calling a human brain a "blob of fats, salts, and chemicals". It technically makes sense, but it misses the forest for the trees, and ignores the fact that this mere pattern matching text machine is doing things people said were impossible a few years ago. The simplicity and seeming mundanity of a technology has no bearing on its potential or emergent properties. A single termite, observed by itself, could never reveal what it could build when assembled together with its brethren.
I agree that there are lots of limitations to current LLMs, but it seems somewhat naive to ignore the rapid pace of improvement over the last 5 years, and the emergent properties of AI at scale, especially in doing things claimed to be impossible only years prior (remember when people said LLMs could never do math, or that image models could never get hands or text right?).
Nobody understands with greater clarity or specificity the limitations of current LLMs than the people working in labs right now to make them better. The AGI prognostications aren't suppositions pulled out of the realm of wishful thinking; they exist because of fundamental revelations that have occurred in the development of AI as it has scaled up over the past decade.
I know I claimed that HN's hatred of AI was an emotional one, but there is an element to their reasoning too that leads them down the wrong path. By seeing more flaws than the average person in these AI systems, and seeing the tact with which companies describe their AI offerings to make them seem more impressive (currently) than they are, you extrapolate that sense of "figuring things out" into a robust model of how AI is and must really be. In doing so, you pattern match AI hype to web3 hype and assume that since the hype is similar in certain ways, it must also be a bubble/scam just waiting to pop with all the lies revealed. This is the same pattern-matching trap that people accuse AI of falling into: confidently claiming to have solved a problem correctly while the output is flawed.
neffy
5 days ago
No, it's really not - it's exactly what they are: multi-dimensional pattern matching machines, using massive databases put together from resources like Stack Overflow and Chegg (every cheater's go-to for assignment answers), massive copyright theft, etc. If that wasn't the case, there wouldn't be jobs right now writing answers to feed into the databases.
And that's actually quite useful - given that most of this material is paywalled or blocked from search engines. It's less useful when you look at code examples that mix different versions of Python, and have comments referring to figures on the previous page. I'm afraid it becomes very obvious when you look under the hood at the training sets themselves, just how this is all being achieved.
atleastoptimal
4 days ago
Look into every human’s brain and you’d see the same thing. How many humans can come up with novel, useful patents? How many novel useful patents themselves are just variations of existing tech?
All intelligence is pattern matching, just at different scales. AI is doing the same thing human brains do.
omnicognate
4 days ago
> Look into every human’s brain and you’d see the same thing.
Hard not to respond to that sarcastically. If you take the time to learn anything about neuroscience you'll realise what a profoundly ignorant statement it is.
weweersdfsd
4 days ago
If that is the case, where are the LLM-controlled robots, where an LLM is simply given access to a bunch of sensors and servos and learns to control them on its own? And why are jailbreaks a thing?
ethanwillis
4 days ago
Seeing as your LLMs need the novel output of human brains to even exist or expand capabilities, quite a lot.
But even if it's not a lot, it's more than the number of LLMs that can invent new meaning which is a grand total of 0.
tjr
4 days ago
If tomorrow, all human beings ceased to exist, barring any in-progress operations, LLMs would go silent, and the machinery they run on would eventually stop functioning.
If tomorrow, all LLMs ceased to exist, humans would carry on just fine, and likely build LLMs all over again, next time even better.
RichardCA
4 days ago
Ironically, I would far prefer the Douglas Adams idea of "Genuine People Personalities" over the current status quo.
If the self checkout scanner at the supermarket started bickering with me for entering the wrong produce code, that would wrap up the whole Turing Test thing for me.
herpdyderp
5 days ago
Oh, they for sure know what they're doing.
DrillShopper
4 days ago
Counterpoint: fuck them, they know exactly what they do (try to extract more work for the exact same pay out of their subordinates)
xbmcuser
5 days ago
For me this is the biggest disconnect: the current level of AI is not good enough to replace devs, but it is good enough to automate a lot of office work, i.e. managerial tasks that would have cost too much time and effort to automate before. I think Google seems to understand this a bit, as they have replaced a lot of middle management because of AI and not as many developers.
insane_dreamer
4 days ago
also customer service; I was at my dental office today and there were 3 people handling checkin/checkout. I'm quite confident 80% of their workload could be automated away to where you would just need a single person to handle edge cases. That's where we're going to see a lot of entry-level jobs go away, in many domains.
phatskat
3 days ago
> That's where we're going to see a lot of entry-level jobs go away, in many domains.
And to me this is worse news. Losing people in higher-paying jobs would hurt the economic fabric more, but by that token they'd have more power and influence to ensure a better safety net for the inevitable rise of AI and automation in much of the workforce.
Entry level workers can’t afford to not work, they can’t afford to protest or advocate, they can’t afford the future that AI is bringing closer to their doorsteps. Without that safety net, they’ll be struggling and impoverished. And then will everyone in the higher paying positions help, or will we ignore the problem until AI actually is capable of replacing us, and will it be too late by then?
insane_dreamer
3 days ago
> will everyone in the higher paying positions help, or will we ignore the problem until AI actually is capable of replacing us, and will it be too late by then
if history is anything to go by, it'll be the latter, sadly
vkou
5 days ago
I'd like to see those SVPs and PMs, or shit, even a line manager use AI to implement something as simple as a 2-month intern project[1] in a week.
---
[1] We generally budget about half an intern's time for finding the coffee machine, learning how to show up to work on time, going on a fun event with the other interns to play minigolf, discovering that unit tests exist, etc, etc.
elevatortrim
5 days ago
I actually built something in under a week (a time tracking tool that helps developers log their time consistently in Jira and Harvest) that most developers in my company now use.
I have a backend development background, so I was able to review the BE code and fix some bugs. But I did not bother learning the Jira and Harvest API specs at all; AI (Cursor + Sonnet 4) figured it all out.
I would not be able to write the front-end of this. It is JS-based and updates the UI via real-time HTTP requests (I forgot the name of this technology; the new AJAX, that is), and I do not have time to learn it, but again, I was able to tweak what AI generated and make it work.
Not only did AI help me do something in much less time than it would otherwise have taken, it enabled me to do something that would otherwise not have been possible.
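For context on how small the API surface of such a tool is, here is a minimal sketch of the two write calls it would make. The endpoint paths follow Harvest's v2 REST API and Jira Cloud's v3 REST API, but the domain, IDs, issue key, and helper names are all made up for illustration:

```python
# Hypothetical helpers sketching the two time-logging payloads.
# Endpoint shapes follow Harvest API v2 and Jira Cloud REST API v3;
# the account domain, project/task IDs, and issue key are invented.

HARVEST_URL = "https://api.harvestapp.com/v2/time_entries"

def harvest_time_entry(project_id: int, task_id: int,
                       spent_date: str, hours: float) -> dict:
    """Payload for POST /v2/time_entries (Harvest)."""
    return {
        "project_id": project_id,
        "task_id": task_id,
        "spent_date": spent_date,  # YYYY-MM-DD
        "hours": hours,
    }

def jira_worklog(issue_key: str, started: str, seconds: int) -> tuple[str, dict]:
    """URL and payload for POST /rest/api/3/issue/{key}/worklog (Jira Cloud)."""
    url = f"https://example.atlassian.net/rest/api/3/issue/{issue_key}/worklog"
    return url, {"started": started, "timeSpentSeconds": seconds}

# Logging the same 1.5 hours to both systems:
entry = harvest_time_entry(14307913, 8083365, "2025-06-02", 1.5)
url, worklog = jira_worklog("PROJ-123", "2025-06-02T09:00:00.000+0000", 5400)
```

The hard part a tool like this solves isn't the HTTP calls; it's keeping the two systems' notions of projects, tasks, and durations consistently mapped to each other.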
panarchy
5 days ago
I'd rather see those SVPs, PMs, and line managers be turned into AI.
curvaturearth
4 days ago
This is the way
sotix
4 days ago
At my company, the tech leaders aren't doing it out of mass hysteria. They're very smart individuals. The push is coming from our investors that come from the ring of classic YC-affiliated VCs. My friend who runs a YC-backed company has been told to do it by his investors too. It's a coordinated effort by external investors rather than a mass panic by individual tech leaders. If you read VC investor literature, it's full of incredible claims about how companies who don't use AI will be left behind. The exact type of stuff you'd expect to hear from groups who aim to hit the lottery with a few of their investments.
baxtr
5 days ago
AI has become the ultimate excuse for weak managers to pressure tech folks.
bambax
4 days ago
> My manager told me that the time to deliver my latest project was cut to 20% of the original estimate because we are "an AI-first company".
Tell him to code it himself then? If it can be done with only prompting, and he's able to type a sentence or two in a web form, what's stopping him?
boothby
5 days ago
> The mass hysteria among SVPs and PMs is absolutely insane right now, I've never seen anything like it.
This isn't entirely foreign to me; it sure looks a lot like the hype train of the dot-com bubble. My experience says that if you're holding stock in a company going down this road, it has very low long-term value. Even if you think there's room to grow, bubbles pop fast and hard.
gedy
5 days ago
> My manager told me that the time to deliver my latest project was cut to 20% of the original estimate because we are "an AI-first company".
Challenge your manager to a race, have him vibe code
KronisLV
4 days ago
> My manager told me that the time to deliver my latest project was cut to 20% of the original estimate because we are "an AI-first company".
This sounds incredibly stupid. It's going to take as long as it takes, and if they're not okay with that, their delusional estimates should be allowed to crash and burn, which would hopefully be a learning experience.
The problem is that sometimes there’s an industry wide hysteria even towards useful tech - like doing a lift and shift of a bunch of monoliths to AWS to be “cloud scale”, introducing Kubernetes or serverless without the ability to put either to good use, NoSQL for use cases it’s not good at and most recently AI.
I think LLMs will eventually weather the hype cycle and it will settle down on what they’re actually kinda okay at vs not, the question is how many livelihoods will be destroyed along the way (alongside all the issues with large scale AI datacenter deployments).
On a personal level, it feels like you should maybe do the less ethical thing: ask your employer for somewhere in the ballpark of 1000-3000 USD a month in Claude credits, babysit it enough to ship a functional MVP within the new 20% estimate, and when they complain about missing functionality, tell them the AI tech just isn't mature enough, but thankfully you'll be able to swoop in and salvage it for only the remaining 80% of the original estimate's worth of work.