AkshatM
11 hours ago
I find the contrast between two narratives around technology use so fascinating:
1. We advocate automation because people like Brenda are error-prone and machines are perfect.
2. We disavow AI because people like Brenda are perfect and the machine is error-prone.
These aren't contradictions because we only advocate for automation in limited contexts: when the task is understandable, the execution is reliable, the process is observable, and the endeavour tedious. The complexity of the task isn't a factor - it's complex to generate correct machine code, but we trust compilers to do it all the time.
In a nutshell, we seem to be fine with automation if we can have a mental model of what it does and how it does it in a way that saves humans effort.
So, then - why don't people embrace AI with thinking mode as an acceptable form of automation? Can't the C-suite in this case follow its thought process and step in when it messes up?
I think people still find AI repugnant in that case. There's still a sense of "I don't know why you did this and it scares me", despite the debuggability, and it comes from the autonomy without guardrails. People want to be able to stop bad things before they happen, but with AI you often only seem to do so after the fact.
Narrow AI, AI with guardrails, AI with multiple safety redundancies - these don't elicit the same reaction. They seem to be valid, acceptable forms of automation. Perhaps that's what the ecosystem will eventually tend to, hopefully.
ItsBob
10 hours ago
It's not as black-and-white as "Brenda good, AI bad". It's much more nuanced than this.
When it comes to (traditional) coding, for the most part, when I program a function to do X, every single time I run that function from now until the heat death of the sun, it will always produce Y. Forever! When it does, we understand why, and when it doesn't, we also can understand why it didn't!
When I use AI to perform X, every single time I run that AI from now until the heat death of the sun it will maybe produce Y. Forever! When it does, we don't understand why, and when it doesn't, we also don't understand why!
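To make the contrast concrete, here is a minimal Python sketch; the generate() call at the end is a hypothetical stand-in for whatever model API you'd use, not a real function:

    def vat_inclusive(net: float, rate: float = 0.20) -> float:
        # Traditional code: the same inputs give the same output on every run, forever.
        return round(net * (1 + rate), 2)

    assert vat_inclusive(100.0) == 120.0   # holds today, tomorrow, and at the heat death of the sun

    # An LLM invoked at non-zero temperature samples from a distribution over outputs,
    # so the same prompt may come back as "120.00", "£120", or a short essay about VAT:
    # answer = generate("Add 20% VAT to 100")   # hypothetical call; result varies run to run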
We know that Brenda might screw up sometimes but she doesn't run at the speed of light, isn't able to produce a thousand lines of Excel Macro in 3 seconds, doesn't hallucinate (well, let's hope she doesn't), can follow instructions etc. If she does make a mistake, we can find it, fix it, ask her what happened etc. before the damage is too great.
In short: when AI does anything at all, we only have, at best, a rough approximation of why it did it. With Brenda, it only takes a couple of questions to figure it out!
Before anyone says I'm against AI, I love it and am neck-deep in it all day when programming (not vibe-coding!) so I have a full understanding of what I'm getting myself into but I also know its limitations!
nerdjon
9 hours ago
> When I use AI to perform X, every single time I run that AI from now until the heat death of the sun it will maybe produce Y. Forever! When it does, we don't understand why, and when it doesn't, we also don't understand why!
To make this even worse, it may even produce Y just enough times to make it seem reliable and then it is unleashed without supervision, running thousands or millions of times, wreaking havoc producing Z in a large number of places.
ryandrake
8 hours ago
Exactly. Fundamentally, I want my computer's computations to be deterministic, not probabilistic. And, I don't want the results to arbitrarily change because some company 1,500 miles away from me up-and-decided to "train some new model" or whatever it is they do.
A computer program should deliver reliable, consistent output if it is consistently given the same input. If I wanted inconsistency and unreliability, I'd ask a human to do it.
LightBug1
8 hours ago
It's not arbitrary ... your precise and deterministic, multi-year, financial analysis needs to be corrected every so often for left-wing bias.
/s ffs
qazxcvbnmlp
7 hours ago
Brenda also needs to put food on the table. If Brenda is careless and messes up, we can fire Brenda; because of this, Brenda tries not to be careless (among other emotions). However, I cannot deprive an AI model of pay because it messed up.
tekbruh9000
2 hours ago
The post you replied to called out how the argument cuts both ways: Brenda bad, AI good; and AI bad, Brenda good. You reduced it to "AI bad, Brenda good." Not sure about the rest of your response, then.
Brenda just recalls some predetermined behaviors she's lived out before. She cannot recall any given moment like we want to believe.
Ever think to ask Brenda what else she might spend her life on if these 100% ephemeral office role play "be good little missionaries for the wall street/dollar" gigs didn't exist?
You're revealing your ignorance of how people work while being anxious about our ignorance of how the machine works. You have acclimated to your ignorance well enough it seems. What's the big deal if we don't understand the AI entirely? Most drivers are not ASE certified mechanics. Most programmers are not electrical engineers. Most electrical engineers are not physicists. I can see it's not raining without being a climatologist. Experts circumlocute the language of their expertise without realizing their language does not give rise to reality. Reality gives rise to the language. So reality will be fine if we don't always have the language.
Think of a random date generator that only generates dates in your lived past. It does so. Once you read the date and confirm you were alive can you describe what you did? Oh no! You don't have memory of every moment to generate language for. Cognitive function returned null. Universe intact.
Lacking the understanding you desire is unimportant.
You think you're cherishing Brenda but really just projecting co-dependency that others LARP effort that probably doesn't really matter. It's just social gossip we were raised on so it takes up a lot of our working memory.
A4ET8a8uTh0_v2
8 hours ago
It is even worse, in a sense: it is not either, it is not neither, it is not even both, as variations of Brenda exist throughout the multiverse in all shapes and forms, including one that can troubleshoot her own formulas with ease and accuracy.
But you are absolutely right about one thing. Brenda can be asked and, depending on her experience, she might give you a good idea of what might have happened. LLMs still seem to not have that 'feature'.
elevatortrim
10 hours ago
No contradiction here:
When we say “machine”, we mean deterministic algorithms and predictable mechanisms.
Generative AI is neither of those things (in theory it is deterministic but not for any practical applications).
If we order by predictability:
Quick Sort > Brenda > Gen AI
dsr_
10 hours ago
There are two kinds of reliability:
Machine reliability does the same thing the same way every time. If there's an error on some input, it will always make that error on that input, and somebody can investigate it and fix it, and then it will never make that error again.
Human reliability does the job even when there are weird variances or things nobody bothered to check for. If the printer runs out of paper, the human goes to the supply cabinet and gets out paper and if there is no paper the human decides whether to run out right now and buy more paper or postpone the print job until tomorrow; possibly they decide that the printing doesn't need to be done at all, or they go downstairs and use a different printer... Humans make errors but they fix them.
LLMs are not machine reliable and not human reliable.
anonzzzies
8 hours ago
> If the printer runs out of paper, the human goes to the supply cabinet and gets out paper and if there is no paper the human decides
Sure, these humans exist, but the others, whom I unfortunately happen to encounter every day, are the ones that go into broken mode immediately when something is unexpected. Today I ordered something they ran out of, and the girl behind the counter just stared into The Deep, not having a clue what to do or say. Or yesterday at dinner, the PoS (on batteries) ran out of power when I tried to pay. The guy just walked off and went outside for a smoke. I stood there waiting to pay. The owner apologized and fixed it after a while, but I am saying, the employee who runs out of paper and then finds and puts more paper in is not very... common... in the real world.
some_guy_in_ca
3 hours ago
Alignment problem? JK
insane_dreamer
2 hours ago
Or the human might take the printer out back with his buddies and smash it to bits ;)
afandian
10 hours ago
I was brought up on the refrain of "aren't computers silly, they do exactly what you tell them to do to the letter, even if it's not what you meant". That had its roots in computers mostly being programmable BASIC machines.
Then came the apps and notifications, and we had to caveat "... when you're writing programs". Which is a diminishing part of the computer experience.
And now we have to append "... unless you're using AI tools".
The distinction is clear to technical people. But it seems like an increasingly niche and alien thing from the broader societal perspective.
I think we need a new refrain, because with the AI stuff it increasingly seems "computers do what they want, don't even get it right, but pretend that they did."
Lord-Jobo
9 hours ago
We have absolutely descended, and rapidly, into “computers do whatever the fuck they want and there’s nothing you can do about it” in the past 5 years, and gen AI is only half of the problem.
The other half comes from how incredibly opinionated and controlling the tech giants have become. Microsoft doesn't even ALLOW consent on Windows (yes or maybe later), Google is doing all it can to turn the entire internet into a Chrome-only experience, and Apple had to be fought for an entire decade to allow users to place app icons wherever they want on their Home Screen.
There is no question that the overly explicit quirky paradigm of the past was better for almost everyone. It allowed for user control and user expression, but apparently those concepts are bad for the wallet of big tech so they have to go. Generative AI is just the latest biggest nail in the coffin.
ryandrake
8 hours ago
We have come a LONG way from the "Where do you want to go today?" of the 90s. Now, it's "You're going where we tell you that you can go, whether you like it or not!"
afandian
8 hours ago
Flash-backs to dial-up and making sure I had my list of websites written down and ready for when I connected.
pohl
9 hours ago
Pop culture characters like Lt. Commander Data seem anachronistic now.
hunterpayne
4 hours ago
Nit: no ML is deterministic in any way. Anything that is Generative AI is ML. This fact is literally built into the algorithms at the mathematical level.
1718627440
4 hours ago
First, they all add a source of randomness, and second, determinism is judged against the user's model. A pseudo-random number generator is also deterministic in the technical sense, but for the user it isn't.
When the user can't reason about it, it isn't deterministic to them.
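A small Python illustration of that distinction, for what it's worth:

    import random

    # Deterministic in the technical sense: the same seed replays the same sequence.
    a = random.Random(42)
    b = random.Random(42)
    assert [a.random() for _ in range(5)] == [b.random() for _ in range(5)]

    # But a user who never sees the seed or the algorithm has no model that predicts
    # the next value, so to them the output is effectively non-deterministic.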
stavros
9 hours ago
If you think programs are predictable, I have a bridge to sell you.
The only relevant metric here is how often each thing makes mistakes. Programs are the most reliable, though far from 100%, humans are much less than that, and LLMs are around the level of humans, depending on the humans and the LLM.
watwut
9 hours ago
When human makes a mistake, we call it a mistake. When human lies, we call it a lie. In both cases, we blame the human.
When LLM does the same, we call it hallucination and blame the human.
raincole
5 hours ago
Which is the correct reaction, because LLM isn't a human and can't be held accountable.
wat10000
8 hours ago
Programs can be very close to 100% reliable when made well.
In my life, I've never seen `sort` produce output that wasn't properly sorted. I've never seen a calculator come up with the wrong answer when adding two numbers. I have seen filesystems fail to produce the exact same data that was previously written, but this is something that happens once in a blue moon, and the process is done probably millions of times a day on my computers.
There are bugs, but bugs can be reduced to a very low level with time, effort, and motivation. And technically, most bugs are predictable in theory, they just aren't known ahead of time. There are hardware issues, but those are usually extremely rare.
Nothing is 100% predictable, but software can get to a point that's almost indistinguishable.
stavros
8 hours ago
> Programs can be very close to 100% reliable when made well.
This is a tautology.
> I've never seen a calculator come up with the wrong answer when adding two numbers.
> And technically, most bugs are predictable in theory, they just aren't known ahead of time.
When we're talking about reliability, it doesn't matter whether a thing can be reliable in theory, it matters whether it's reliable in practice. Software is unreliable, humans are unreliable, LLMs are unreliable. To claim otherwise is just wishful thinking.
jakelazaroff
7 hours ago
That's not a tautology. You said "programs are the most reliable, though far from 100%"; they're just telling you that your upper bound for well-made programs is too low.
faeyanpiraat
7 hours ago
You mixed up correctness and reliability.
The iOS calculator will make the same incorrect calculation, but reliably, every time.
stavros
7 hours ago
Don't move the goalposts. The claim was:
> I've never seen a calculator come up with the wrong answer when adding two numbers.
1.00000001 + 1 doesn't equal 2, therefore the claim is false.
samus
6 hours ago
That's a known limitation of floating point numbers. Nothing buggy about that.
Muskwalker
3 hours ago
In fact in this case, it's not the known limitation of floating point numbers that's to blame: this Calculator application gives you the ability (submenu under View > Decimal Places) to choose a precision between 0 and 15 decimal places, and it will do rounding beyond that point. I think the default is 8.
The original screenshot shows a number with 13 decimal places, and if you set it at or above 13, then the calculation will come out correct.
The application doesn't really go out of its way to communicate this to the user. For the most part maybe it doesn't matter, but "user entering more decimal places than they'll get back" might be one thing an application might usefully highlight.
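A rough Python analogue of that display setting (not the Calculator app's actual code; the 13-decimal-place input is just an illustration):

    total = 1.0000000000001 + 1
    print(f"{total:.8f}")     # '2.00000000'       -> rounded to 8 places for display, it looks like plain 2
    print(f"{total:.13f}")    # '2.0000000000001'  -> the full value was never lost internally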
1718627440
4 hours ago
1.00000001f + 1u does equal 2f.
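That checks out if you treat the f as single precision; a quick sketch using numpy's float32 as a stand-in for C's float:

    import numpy as np

    # float32 carries roughly 7 significant decimal digits, so 1.00000001 rounds to exactly 1.0...
    assert np.float32(1.00000001) == np.float32(1.0)
    print(np.float32(1.00000001) + np.float32(1.0))   # 2.0

    # ...while float64 keeps the extra digit, which is what the calculator argument was about:
    print(1.00000001 + 1.0)                           # 2.00000001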
wat10000
6 hours ago
Sorry, but this annoys me. The claim might be false if I had made it after seeing your screenshot. But you don't know what I've seen in my life up to that point. The claim that all calculators are infallible would be false, but that's not the claim I made.
When a personal experience is cited, a valid counterargument would be "your experience is not representative," not "you are incorrect about your own experience."
stavros
3 hours ago
Well if you haven't seen enough calculators to see one that can't add, a very common issue with floating point arithmetic on computers, you shouldn't offer your experience as an argument for anything other than that you haven't seen enough calculators.
wat10000
an hour ago
How many calculators do I need to have seen in order to make the claim that there are many calculators which are essentially 100% reliable?
Note that I am referring to actual physical calculators, not calculator apps on computers.
stavros
29 minutes ago
Well, to make the claim you actually made, which is that you haven't seen a single calculator that was wrong, you'd need to have seen anywhere from zero to all of them. It's just that the "zero" end of that spectrum doesn't really tell us anything about calculators.
sjsdaiuasgdia
7 hours ago
RE: the calculator screenshot - it's still reliable because the same answer will be produced for the same inputs every time. And the behavior, though possibly confusing to the end user at times, is based on choices made in the design of the system (floating point vs integer representations, rounding/truncating behavior, etc). It's reliable deterministic logic all the way down.
stavros
7 hours ago
> I've never seen a calculator come up with the wrong answer when adding two numbers.
1.00000001 + 1 doesn't equal 2, therefore the claim is false.
sjsdaiuasgdia
7 hours ago
Sure it does, if you have made a system design decision about the precision of the outputs.
At the precision the system is designed to operate at, the answer is 2.
wat10000
6 hours ago
> > Programs can be very close to 100% reliable when made well.
> This is a tautology.
No it's not. There are plenty of things that can't be 100% reliable no matter how well they're made. A perfect bridge is still going to break down and eventually fall apart. The best possible motion-activated light is going to have false positives and false negatives because the real world is messy. Light bulbs will burn out no matter how much care and effort goes into them.
In any case, unless you assert that programs are never made well, then your own statement disproves your previous statement that the reliability of programs is "far from 100%."
Plenty of software is extremely reliable in practice. It's just easy to forget about it because good, reliable software tends to be invisible.
samus
6 hours ago
> No it's not. There are plenty of things that can't be 100% reliable no matter how well they're made. A perfect bridge is still going to break down and eventually fall apart. The best possible motion-activated light is going to have false positives and false negatives because the real world is messy. Light bulbs will burn out no matter how much care and effort goes into them.
All these failure modes are known and predictable, at least statistically.
wat10000
an hour ago
If you're willing to consider things in aggregate then software is perfectly predictable too.
mrguyorama
3 hours ago
> I've never seen a calculator come up with the wrong answer when adding two numbers.
Intel once made a CPU that got some math only slightly wrong, in ways that probably would not have affected the vast majority of users. The backlash from the industry was so strong that Intel spent half a billion (1994) dollars replacing all of them.
Our entire industry avoids floating point numbers for some types of calculations because, even though they are mostly deterministic with minimal constraints, that mental model is so hard to manage that you are better off avoiding it entirely and removing an entire class of errors from your work
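For example, the classic reason finance code reaches for exact decimal types (or integer cents) instead of binary floats:

    from decimal import Decimal

    print(0.1 + 0.2 == 0.3)                                      # False: binary floats can't represent these exactly
    print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))  # True: decimal arithmetic stays exact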
But now we are just supposed to do everything with a slot machine that WILL randomly just do the wrong thing some unknowable percentage of the time, and that wrong thing has no logic?
No, fuck that. I don't even call myself an engineer and such frivolity is still beyond the pale. I didn't take 4 years of college and ten years of hard earned experience to build systems that will randomly fuck over people with no explanation or rhyme or reason.
I DO use systems that are probabilistic in nature, but we use rather simple versions of those because when I tell management "We can't explain why the model got that output", they rightly refuse to accept that answer. Some percentage of orders getting mispredicted is fine. Orders getting mispredicted that cannot be explained entirely from their data is NOT. When a customer calls us, we cannot tell them "Oh, that's just how Neural networks are, you were unlucky".
Notably, those in the industry that HAVE jumped on the neural net/"AI" bandwagon for this exact problem domain have not demonstrated anything close to seriously better results. In fact, one of our most DRAMATICALLY effective signals is a third party service that has been around for decades, and we were using a legacy integration that hadn't been updated in a decade. Meanwhile, Google's equivalent product/service couldn't even match the results of internally developed random forest models from data science teams that were.... not good. It didn't even match the service Microsoft has recently killed, which was similarly braggadocious about "AI" and similarly trash.
All that panopticon's worth of data, all that computing power, all that supposed talent, all that lack of privacy and tracking, and it was almost as bad as a coin flip.
philipallstar
10 hours ago
> If we order by predictability:
> Quick Sort > Brenda > Gen AI
Those last two might be the wrong way round.
dTal
10 hours ago
"Thinking mode" only provides the illusion of debuggability. It improves performance by generating more tokens which hopefully steer the context towards one more likely to produce the desired response, but the tokens it generates do not reflect any sort of internal state or "reasoning chain" as we understand it in human cognition. They are still just stochastic spew. You have no more insight into why the model generates the particular "reasoning steps" it does than you do into any other output, and neither do you have insight into why the reasoning steps lead to whatever conclusion it comes to. The model is much less constrained by the "reasoning" than we would intuit for a human - it's entirely capable of generating an elaborate and plausible reasoning chain which it then completely ignores in favor of some invisible built-in bias.
wat10000
8 hours ago
I'm always amused when I see comments saying, "I asked it why it produced that answer, and it said...." Sorry, you've badly misunderstood how these things work. It's not analyzing how it got to that answer. It's producing what it "thinks" the response to that question should look like.
dfxm12
8 hours ago
There are other narratives going on in the background, though, both called out by the article and implied, including:
Brenda probably has annual refresher courses on GAAP, while her exec and the AI don't.
Automation is expected to be deterministic. The outputs can be validated for a given input. If you need more automation than Excel functions, writing a Power Automate flow or recording an Office Script is sufficient and reliable as automation while being cheaper than AI. Can you validate AI as deterministic? This is important for accounting. Maybe you want some thinking around how to optimize a business process, but not for following them.
Brenda as the human-in-the-loop using AI will be much more able than her exec. Will Brenda + AI be better (or more valuable considering the cost of AI) than Brenda alone? That's the real question, I suppose.
AI in many aspects of our life is simply not good right now. For a lot of applications, AI is perpetually just a few years away from being as useful as you describe. If we get there, great.
Aeolun
10 hours ago
> We disavow AI because people like Brenda are perfect and the machine is error-prone.
No, no. We disavow AI because our great leaders inexplicably trust it more than Brenda.
candiddevmike
10 hours ago
I don't understand why generative AI gets a pass at constantly being wrong, but an average worker would be fired if they performed the same way. If a manager needed to constantly correct you or double check your work, you'd be out. Why are we lowering the bar for generative AI?
basscomm
8 hours ago
My kneejerk reaction is the sunk cost fallacy (AI is expensive), but I'm pretty sure it's actually because businesses have spent the last couple of decades doing absolutely everything they can to automate as many humans out of the workforce as possible.
latchup
5 hours ago
Multiple reasons:
* Gen AI never disagrees with or objects to boss's ideas, even if they are bad or harmful to the company or others. In fact, it always praises them no matter what. Brenda, being a well-intentioned human being, might object to bad or immoral ideas to prevent harm. Since boss's ego is too fragile to accept criticism, he prefers gen AI.
* Boss is usually not qualified, willing, or free to do Brenda's job to the same quality standard as Brenda. This compels him to pay Brenda and treat her with basic decency, which is a nuisance. Gen AI does not demand fair or decent treatment and (at least for now) is cheaper than Brenda. It can work at any time and under conditions Brenda refuses to. So boss prefers gen AI.
* Brenda takes accountability for and pride in her work, making sure it is of high quality and as free of errors as she can manage. This is wasteful: boss only needs output that is good enough to make it someone else's problem, and as fast as possible. This is exactly what gen AI gives him, so boss prefers gen AI.
anon721656321
10 hours ago
If a worker could be right 50% of the time, get paid 1 cent to write a 5000-word essay on a random topic, and do it in less than 30 seconds, then I think managers would be fine hiring that worker for that rate as well.
cryptonym
9 hours ago
5000 half-right words is worthless output. That can even lead to negative productivity.
hitarpetar
8 hours ago
great, now who are you paying to sort the right output from the wrong output?
Levitz
9 hours ago
There's a variety of reasons.
You don't have a human to manage. The relationship is completely one-sided; you can query a generative AI at 3 in the morning on New Year's Eve. This entity has no emotions to manage and no interests of its own.
There's cost.
There's an implicit promise of improvement over time.
There's the domain of expertise being inhumanly wide. You can ask about cookies right now, then about 12th-century France, then about biochemistry.
The fact that an average worker would be fired if they perform the same way is what the human actually competes with. They have responsibility, which is not something AI can offer. If it was the case that, say, Anthropic, actually signed contracts stating that they are liable for any mistakes, then humans would be absolutely toast.
amscanne
10 hours ago
It’s much cheaper than Brenda (superficially, at least). I’m not sure a worker that costs a few dollars a day would be fired, especially given the occasional brilliance they exhibit.
ryandrake
8 hours ago
I've been trying to open my mind and "give AI a chance" lately. I spent all day yesterday struggling with Claude Code's utter incompetence. It behaves worse than any junior engineer I've ever worked with:
- It says it's done when its code does not even work, sometimes when it does not even compile.
- When asked to fix a bug, it confidently declares victory without actually having fixed the bug.
- It gets into this mode where, when it doesn't know what to do, it just tries random things over and over, each time confidently telling me "Perfect! I found the error!" and then waiting for the inevitable response from me: "No, you didn't. Revert that change".
- Only when you give it explicit, detailed commands, "modify fade_output to be -90," will it actually produce decent results, but by the time I get to that level of detail, I might as well be writing the code myself.
To top it off, unlike the junior engineer, Claude never learns from its mistakes. It makes the same ones over and over and over, even if you include "don't make XYZ mistake" in the prompt. If I were an eng manager, Claude would be on a PIP.
sswatson
7 hours ago
Recently I've used Claude Code to build a couple TUIs that I've wanted for a long time but couldn't justify the time investment to write myself.
My experience is that I think of a new feature I want, I take a minute or so to explain it to Claude, press enter, and go off and do something else. When I come back in a few minutes, the desired feature has been implemented correctly with reasonable design choices. I'm not saying this happens most of the time, I'm saying it happens every time. Claude makes mistakes but corrects them before coming to rest. (Often my taste will differ from Claude's slightly, so I'll ask for some tweaks, but that's it.)
The takeaway I'm suggesting is that not everyone has the same experience when it comes to getting useful results from Claude. Presumably it depends on what you're asking for, how you ask, the size of the codebase, how the context is structured, etc.
hunterpayne
4 hours ago
It's great for demos; it's lousy for production code. The different cost of errors in these two use cases explains (almost) everything about the suitability of AI for various coding tasks. If you are the only one who will ever run it, it's a demo. If you expect others to use it, it's not.
simonw
8 hours ago
Learning to use Claude Code (and similar coding agents) effectively takes quite a lot of work.
Did you have it creating and running automated tests as it worked?
9rx
7 hours ago
> Learning to use Claude Code (and similar coding agents) effectively takes quite a lot of work.
I've tried to put in the work. I can even get it working well for a while. But then all of a sudden it is like the model suffers a massive blow to the head and can't produce anything coherent anymore. Then it is back to the drawing board, trying all over again.
It is exhausting. The promise of what it could be is really tempting fruit, but I am at the point that I can't find the value. The cost of my time to put in the work is not being multiplied in return.
> Did you have it creating and running automated tests as it worked?
Yes. I work in a professional capacity. This is a necessity regardless of who (or what) is producing the product.
yfontana
7 hours ago
> - It says it's done when its code does not even work, sometimes when it does not even compile.
> - When asked to fix a bug, it confidently declares victory without actually having fixed the bug.
You need to give it ways to validate its work. A junior dev will also give you code that doesn't compile or should have fixed a bug but doesn't if they don't actually compile the code and test that the bug is truly fixed.
ryandrake
6 hours ago
Believe me, I've tried that, too. Even after giving detailed instructions on how to validate its work, it often fails to do it, or it follows those instructions and still gets it wrong.
Don't get me wrong: Claude seems to be very useful if it's on a well-trodden train track and never has to go off the tracks. But it struggles when its output is incorrect.
The worst behavior is this "try things over and over" behavior, which is also very common among junior developers and is one of the habits I try to break from real humans, too. I've gone so far as to put into the root CLAUDE.md system prompt:
--NEVER-- try fixes that you are not sure will work.
--ALWAYS-- prove that something is expected to work and is the correct fix, before implementing it, and then verify the expected output after applying the fix.
...which is a fundamental thing I'd ask of a real software engineer, too. Problem is, as an LLM, it's just spitting out probabilistic sentences: it is always 100% confident of its next few words. Which makes it a poor investigator.
hitarpetar
8 hours ago
yOu'Re HoLdInG iT wRoNg
BeFlatXIII
9 hours ago
How much compute costs is it for the AI to do Brenda's job? Not total AI spend, but the fraction that replaced Brenda. That's why they'd fire a human but keep using the AI.
simonw
9 hours ago
Brenda has been kissed on her forehead by the Excel goddess herself. She is irreplaceable.
(More seriously, she also has 20+ years of institutional knowledge about how the company works, none of which has ever been captured anywhere else.)
mrgoldenbrown
6 hours ago
It's not just compute, it's also the setup costs: how much did you have to pay someone to feed the AI Brenda's decades of knowledge specific to her company and all the little special cases of how it does business?
Esophagus4
10 hours ago
Because it doesn’t have to be as accurate as a human to be a helpful tool.
That is precisely why we have humans in the loop for so many AI applications.
If [AI + human reviewer to correct it] is some multiple more efficient than [human alone], there is still plenty of value.
bigstrat2003
8 hours ago
> Because it doesn’t have to be as accurate as a human to be a helpful tool.
I disagree. If something can't be as accurate as a (good) human, then it's useless to me. I'll just ask the human instead, because I know that the human is going to be worth listening to.
Esophagus4
8 hours ago
Autopilot in airplanes is a good example to disprove that.
Good in most conditions. Not as good as a human. Which is why we still have skilled pilots flying planes, assisted by autopilot.
We don’t say “it’s not as good as a human, so stuff it.”
We say, “it’s great in most conditions. And humans are trained how to leverage it effectively and trained to fly when it cannot be used.”
sjsdaiuasgdia
7 hours ago
The autopilots in aircraft have predictable behaviors based on the data and inputs available to them.
This can still be problematic! If sensors are feeding the autopilot bad data, the autopilot may do the wrong thing for a situation. Likewise, if the pilot(s) do not understand the autopilot's behaviors, they may misuse the autopilot, or take actions that interfere with the autopilot's operation.
Generative AI has unpredictable results. You cannot make confident statements like "if inputs X, Y, and Z are at these values, the system will always produce this set of outputs".
In the very short timeline of reacting to a critical mid-flight situation, confidence in the behavior of the systems is critical. A lot of plane crashes have "the pilot didn't understand what the automation was doing" as a significant contributing factor. We get enough of that from lack of training, differences between aircraft manufacturers, and plain old human fallibility. We don't need to introduce a randomized source of opportunities for the pilots to not understand what the automation is doing.
Esophagus4
5 hours ago
But now it seems like the argument has shifted.
It started out as, "AI can make more errors than a human. Therefore, it is not useful to humans." Which I disagreed with.
But now it seems like the argument is, "AI is not useful to humans because its output is non-deterministic?" Is that an accurate representation of what you're saying?
hunterpayne
3 hours ago
Because in one situation we are talking about augmentation, in the other replacement.
sjsdaiuasgdia
2 hours ago
My problem with generative AI is that it makes different errors than humans tend to make. And these errors can be harder to predict and detect than the kinds of errors humans tend to make, because fundamentally the error source is the non-determinism.
Remember "garbage in, garbage out"? We expect technology systems to generate expected outputs in response to inputs. With generative AI, you can get a garbage output regardless of the input quality.
martin-t
10 hours ago
Because it's much cheaper.
So now you don't have to pay people to do their actual work, you assign the work to ML ("AI") and then pay the people to check what it generated. That's a very different task, menial and boring, but if it produces more value for the same amount of input money, then it's economical to do so.
And since checking the output is often a lower skilled job, you can even pay the people less, pocketing more as an owner.
conductr
10 hours ago
It's not even greater trust. It's just passive trust. The thing is, Brenda is her own QA department. Every good Brenda is precisely good because she checks her own work before shipping it. AI does not do this. It doesn't even fully understand the problem/question sometimes, yet provides a smart, definitive-sounding answer. It's like the doctor on The Simpsons: if you can't tell he's a quack, you probably would follow his medical advice.
tstrimple
4 hours ago
> Every good Brenda is precisely good because she checks her own work before shipping it. AI does not do this.
A confident statement that's trivial to disprove. I use claude code to build and deploy services on my NAS. I can ask it to spin up a new container on my subdomain and make it available internal only or also available externally. It knows it has access to my Cloudflare API key. It knows I am running rootless podman and my file storage convention. It will create the DNS records for a cloudflared tunnel or just setup DNS on my pihole for internal only resolution. It will check to make sure podman launched the container and it will then try to make an HTTP request to the site to verify that it is up. It will reach for network tools to test both the public and private interfaces. It will check the podman logs for any errors or warnings. If it detects errors, it will attempt to resolve them and is typically successful for the types of services I'm hosting.
Instructions like: "Setup Jellyfin in a container on the NAS and integrate it with the rest of the *arr stack. I'd like it to be available internally and externally on watch.<domain>.com" have worked extremely well for me. It delivers working and integrated services reliably and does check to see that what it deployed is working all without my explicit prompting.
dionian
10 hours ago
Brenda + AI > Brenda
conductr
10 hours ago
That’s definitely the hype. But I don’t know if I agree. I’m essentially a Brenda in my corporate finance job and so far have struggled to find any useful scenarios to use AI for.
I thought for once this could build me a Gantt chart, because that's an annoying task in Excel. I had the data. When I asked it to help me: "I can't do that, but I can summarize your data". Not helpful.
Any type of analysis is exactly what I don’t want to trust it with. But I could use help actually building things, which it wouldn’t do.
Also, Brendas are usually fast. Having them use a tool like AI that can't be fully trusted just slows them down. So IMO, we haven't proven the AI variable in your equation is actually a positive value.
wat10000
8 hours ago
I can't speak to finance. In programming, it can be useful but it takes some time and effort to find where it works well.
I have had no success in using it to create production code. It's just not good enough. It tends to pattern-match the problem in somewhat broad strokes and produce something that looks good but collapses if you dig into it. It might work great for CRUD apps but my work is a lot more fiddly than that.
I've had good success in using it to create one-off helper scripts to analyze data or test things. For code that doesn't have to be good and doesn't have to stand the test of time, it can do alright.
I've had great success in having it do relatively simple analysis on large amounts of code. I see a bug that involves X, and I know that it's happening in Y. There's no immediately obvious connection between X and Y. I can dig into the codebase and trace the connection. Or I can ask the machine to do it. The latter is a hundred times faster.
The key is finding things where it can produce useful results and you can verify them quickly. If it says X and Y are connected by such-and-such path and here's how that triggers the bug, I can go look at the stuff and see if that's actually true. If it is, I've saved a lot of time. If it isn't, no big loss. If I ask it to make some one-off data analysis script, I can evaluate the script and spot-check the results and have some confidence. If I ask it to modify some complicated multithreaded code, it's not likely to get it right, and the effort it takes to evaluate its output is way too much for it to be worthwhile.
conductr
8 hours ago
I'd agree. Programming is a solid use case for AI. Programming is a part of my job, and hobby too, and that's the main place where I've seen some value with it. It still is not living up to the hype but for simple things, like building a website or helping me generate the proper SQL to get what I want - it helps and can be faster than writing by hand. It's pretty much replaced StackOverflow for helping me debug things or look up how to do something that I know is already solved somewhere and I don't want to reinvent. But, I've also seen it make a complete mess of my codebase anytime I try to build something larger. It might technically give me a working widget after some vibe coding, but I'm probably going to have to clean the whole thing up manually and refactor some of it. I'm not certain that it's more efficient than just doing it myself from the start.
Every other facet of the world that AI is trying to 'take over' is not programming. Programming is writing text, which is what AI is good at. It's using references to other code, which AI has been specifically trained on. Etc. It makes sense that that use case is coming along well. Everything else, not even close IMO. Unless it's similar. It's probably great at helping people draft emails and finish their homework. I don't have those pain points.
mrgoldenbrown
6 hours ago
But execs aren't talking about that, they are talking about firing Brenda, or replacing her with a junior version.
jimbokun
8 hours ago
Yes but:
(CEO + AI) - Brenda << CEO + Brenda < CEO + Brenda + AI
conductr
7 hours ago
By my measurement, AI < 0
m463
2 hours ago
> No, no. We disavow AI because our great leaders inexplicably trust it more than Brenda.
I would add a little nuance here.
I know a lot of people who don't have technical ability either because they advanced out of hands-on or never had it because it wasn't their job/interest.
These types of people are usually the folks who set direction or govern the purse strings.
Here's the thing: they are empowered by AI. They can do things themselves.
And every one of them is so happy. They are tickled pink.
mrgoldenbrown
6 hours ago
They want to trust it, because then they can stop paying Brenda, save a few dollars, and buy a 3rd yacht.
misnome
10 hours ago
“Let’s deploy something as error-prone as Brad, or more so, at infinite scale across our organisation”
xyzzy123
10 hours ago
The promise of AI is that it lets you "skip the drudgery of thinking about the details" but sometimes that is exactly what you don't want. You want one or more humans with experience in the business domain to demonstrate they have thought about the details very carefully. The spreadsheet computes a result but its higher purpose is a kind of "proof" this thinking was done.
If the actual thinking doesn't matter and you just need some plausible numbers that look the part (also a common situation), gen ai will do that pretty well.
harryf
10 hours ago
We need to stop using AI as an umbrella term. It's worth remembering that LLMs can't play chess and that the best chess models like Leela Chess Zero use deep neural networks.
Generative AI - which the world now believes is AI, is not the same as predictive / analytical AI.
It’s fairly easy to demonstrate this by getting ChatGPT to generate a new relatively complex spreadsheet then asking it to analyze and make changes to the same spreadsheet.
The problem we have now is uninformed people believing AI is the answer to everything… if not today then in the near future. Which makes it more of a religion than a technology.
Which may be the whole goal …
> Successful people create companies. More successful people create countries. The most successful people create religions.
— Sam Altman - https://blog.samaltman.com/successful-people
xyzzy123
10 hours ago
Ok yep, fair. My comment was about using copilot-ish tech to generate plausible looking spreadsheets.
The kind of things that a domain expert Brenda knows that ChatGPT doesn't know (yet) are like:
There are 3 vendors a, b, c who all look similar on paper but vendor c always tacks on weird extra charges that take a lot of angry phone calls to sort out.
By volume or weight it looks like you could get 100 boxes per truck but for industry specific reasons only 80 can legally be loaded.
Hyper specific details about real estate compliance in neighbouring areas that mean buildings that look similar on paper are in fact very different.
A good Brenda can understand the world around her as it actually is, she is a player in it and knows the "real" rules rather than operating from general understanding and what people have bothered to write down.
oytis
10 hours ago
> So, then - why don't people embrace AI with thinking mode as an acceptable form of automation?
"Thinking" mode is not thinking, it's generating additional text that looks like someone talking to themselves. It is as devoid of intention and prone to hallucinations as the rest of LLM's output.
> Can't the C-suite in this case follow its thought process and step in when it messes up?
That sounds like manual work you'd want to delegate, not automation.
delaminator
8 hours ago
Brenda has years (hopefully) of institutional knowledge and transferrable skills.
"hmm, those sales don't look right, that profit margin is unusually high for November"
"Last time I used vlookup I forgot to sort the column first"
"Wait, Bob left the company last month, how can he still be filing expenses"
thisisit
9 hours ago
> We disavow AI because people like Brenda are perfect and the machine is error-prone.
I don't think that is the message here. The message is that Brenda might know what she is doing, and maybe AI even helps her.
> She's gonna birth that formula for a financial report and then she's gonna send that financial report
The problem is people who might not know what they are doing
> he would have sent it back to Brenda but he's like oh I have AI and AI is probably like smarter than Brenda and then the AI is gonna fuck it up real bad
Because AI outputs sound so confident it makes even the layman feel like an expert. Rather than involve Brenda to debug the issue, C-suite might say - I believe! I can do it too. AI FTW!
Even when people advocate automation, especially in areas like finance, there is always a human in the loop whose job is to double-check the automation. The day this human finds errors in the machine, there is going to be a lot of noise. And if that day happens to be a quarterly or yearly closing/reporting, there is going to be hell to pay once closing/reporting is done. Both the automation and the developer are going to be hauled up (obviously I am exaggerating here).
miek
10 hours ago
That automation you cite in your #1 is advocated for because it is deterministic and, with effort, fairly well understood (I have countless scripts solidly running for years).
I don't disavow AI, but like the author, I am not thrilled that the masses of excel users suddenly have access to Copilot (gpt4). I've used Copilot enough now to know that there will be huge, costly mistakes.
lemonwaterlime
10 hours ago
The “Brenda” example is a lumped sum fallacy where there is an “average” person or phenomenon that we can benchmark against. Such a person doesn't exist, leading to these dissonant, contradictory dichotomies.
The fact of the matter is that there are some people who can hold lots of information in their head at once. Others are good at finding information. Others still are proficient at getting people to help them. Etc. Any of these people could be tasked with solving the same problem and they would leverage their actual, particular strengths rather than some nebulous “is good or bad at the task” metric.
As it happens, nearly all the discourse uses this lumped sum fallacy, leading to people simultaneously talking past one another while not fundamentally moving the discussion forward.
ItsBob
10 hours ago
I see where you are coming from but in my head, Brenda isn't real.
She represents the typical domain-experts that use Excel imo. They have an understanding of some part of the business and express it while using Excel in a deterministic way: enter a value of X, multiply it by Y and it keeps producing Z forever!
You can train AI to be a better domain expert. That's not in question; however, with AI, you introduce a dice roll: it may not multiply X and Y to get Z... it might get something else. Sometimes. Maybe.
If your spreadsheet is a list of names going on the next annual accounts department outing then the risk is minimal.
If it's your annual accounts that the stock market needs to work out billion dollar investment portfolios, then you are asking for all the pain that it will likely bring.
Peritract
8 hours ago
> You can train AI to be a better domain expert. That's not in question.
I think that very much is in question.
ItsBob
5 hours ago
I have to agree... I have no idea why I wrote that. Silly me. It's a bit of a global statement.
There are, however, definitely domains where it can excel: things like entry-level call handlers... I think they're screwed, in all honesty!
Edit: clarified some stuff...
hunterpayne
3 hours ago
It's not even the question at hand. The question at hand is what is the right solution mix to reduce costs. When that training cost can easily be 20x Brenda's lifetime earnings, it's really hard to say the cost will be less for the LLM solution. The real barriers to entry for LLMs are economic and often involve the cost of errors rather than which process makes more errors.
anon721656321
10 hours ago
The issue is reliability.
Would you be willing to guarantee that some automation process will never mess up, and if/when it does, compensate the user with cash?
For a compiler, with a given set of test suites, the answer is generally yes, and you could probably find someone willing to insure you, for a significant amount of money, that a compilation bug will not screw up in such a large way that it affects your business.
For an LLM, I have a hard time believing that anyone will be willing to provide that same level of insurance.
If a LLM company said "hey use our product, it works 100% of the time, and if it does fuck up, we will pay up to a million dollars in losses" I bet a lot of people would be willing to use it. I do not believe any sane company will make that guarantee at this point, outside of extremely narrow cases with lots of guardrails.
That's why a lot of ai tools are consumer/dev tools, because if they fuck up, (which they will) the losses are minimal.
hansmayer
10 hours ago
> So, then - why don't people embrace AI with thinking mode as an acceptable form of automation
Mainly because Generative AI _is not automation_. Automation runs on a fixed ruleset: predictable, reliable, and actually saving time. Generative AI... is whatever it is, but it is definitely not automation.
nusl
9 hours ago
I feel like it comes down to predictability and overall trust and confidence. AI is still very fucky, and for people that don't understand the nuances, it definitely will hallucinate and potentially cause real issues. It is about as happy as a Linux rm command to nuke hours of work. Fortunately these tools typically have a change log you can undo, but still.
Also Brenda is human and we should prioritize keeping humans in jobs, but with the way shit is going that seems like a lost hope. It's already over.
davedx
9 hours ago
Humans, legacy algorithmic systems, and LLM's have different error modes.
- Legacy systems typically have error modes where integrations or user interface breaks in annoying but obvious ways. Pure algorithms calculating things like payroll tend to be (relatively) rigorously developed and are highly deterministic.
- LLMs have error modes more similar to humans than legacy systems, but more limited. They're non-deterministic, make up answers sometimes, and almost never admit they can't do something; sometimes they make pure errors in arithmetic or logic too.
- Humans have even more unpredictable error modes; on top of the errors encountered in LLMs, they also have emotion, fatigue, org politics, demotivation, misaligned incentives, and so on. But because we've been working with other humans for ten thousand years, we've gotten fairly good at managing each other... but it's still challenging.
LLMs probably need a mixture of "correctness tests" (like evals/unit tests) and "management" (human-in-the-loop).
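A minimal sketch of what such a correctness test might look like; call_model here is just a placeholder for whatever client you'd wrap, and the threshold is arbitrary:

    from typing import Callable

    def pass_rate(call_model: Callable[[str], str], prompt: str, expected: str, runs: int = 20) -> float:
        """Run the same prompt repeatedly; one green run says little about a non-deterministic system."""
        hits = sum(call_model(prompt).strip() == expected for _ in range(runs))
        return hits / runs

    # Usage idea: gate a release on pass_rate(client, "Sum column B: ...", "1,234.56") >= 0.99,
    # and keep the human-in-the-loop for anything below the bar.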
_heimdall
9 hours ago
In my opinion there's a big difference in deterministic and nondeterministic automation.
svnt
10 hours ago
This misunderstands complexity entirely:
> The complexity of the task isn't a factor - it's complex to generate correct machine code, but we trust compilers to do it all the time.
nashashmi
10 hours ago
By the same fascination, do computers become more complex to enhance people? or do people get more complex with the use of computers? Also, do computers allow people to become less skilled and inefficient? or do less skilled and inefficient people require the need for computers?
The vector of change is acceptable in one direction and disliked in another. People become greater versions of themselves with new tech. But people also get dumber and less involved because of new tech.
browningstreet
8 hours ago
I feel like you've squashed a 3D concern (automations at different levels of the tech stack) into a 2D observation (global concerns about automations).
Human determinism, as elastic as it might be, is still different than AI non-determinism. Especially when it comes to numbers/data.
AI might be helpful with information but it's far less trustable for data.
lumost
9 hours ago
The big problem with AI in back-office automation is that it will randomly decide to do something different than it had been doing. Meaning that it could be happily crunching numbers accurately in your development and launch experience, then utterly drop the ball after a month in production.
While humans have the same risk factors, human-oriented back-office processes involve multiple rounds of automated/manual checks which are extremely laborious. Human errors in spreadsheets have particular flavors, such as a forgotten cell, a mistyped number, or reading from the wrong file/column. Humans are pretty good at catching these errors, as they produce either completely wrong results when the columns don't line up, or a typo'd number that is completely out of distribution.
An AI may simply decide to hallucinate realistic column values rather than extracting its assigned input. Or hallucinate a fraction of column values. How do you QA this? You can't guarantee that two invocations of the AI won't hallucinate the same values, you can't guarantee that a different LLM won't hallucinate different values. To get a real human check, you'd need to re-do the task as a human. In theory you can have the LLM perform some symbolic manipulation to improve accuracy... but it can still hallucinate the reasoning traces etc.
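One partial, purely mechanical mitigation (a sketch with hypothetical column names): if the task is verbatim extraction, you can at least flag values that appear nowhere in the source, though this won't catch values copied into the wrong place:

    def flag_invented_values(source_rows: list[dict], extracted_rows: list[dict], column: str) -> list[dict]:
        """Return extracted rows whose value for `column` never occurs anywhere in the source data."""
        allowed = {row[column] for row in source_rows}
        return [row for row in extracted_rows if row[column] not in allowed]

    # e.g. flag_invented_values(ledger_rows, llm_output_rows, "invoice_total")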
If a human decided to make up accounting numbers on one out of every 10000 accounting requests, they would likely be charged with fraud. Good luck finding the AI hallucinations at the equivalent level before some disaster occurs. Likewise, how do you ensure the human Excel operator doesn't get pressured into certifying the AI's numbers when the "don't get fired this week" button is sitting right there in their Excel app? How do you avoid the race to the bottom where the "star" employee is the one certifying the AI results without thorough review?
I'm bullish on AI in backoffice, but ignoring the real difficulties in deployment doesn't help us get there.
0815beck
2 hours ago
It is of course because algorithms can be repaired when they are buggy, but a large language model cannot, because it is impossible to look at its weights and say: look, this is where the mistake happened.
samus
6 hours ago
> it's complex to generate correct machine code, but we trust compilers to do it all the time.
Generating correct machine code is actually pretty simple. It gets complicated if you want efficient machine code.
> So, then - why don't people embrace AI with thinking mode as an acceptable form of automation? Can't the C-suite in this case follow its thought process and step in when it messes up?
> I think people still find AI repugnant in that case. There's still a sense of "I don't know why you did this and it scares me", despite the debuggability, and it comes from the autonomy without guardrails. People want to be able to stop bad things before they happen, but with AI you often only seem to do so after the fact.
> Narrow AI, AI with guardrails, AI with multiple safety redundancies - these don't elicit the same reaction. They seem to be valid, acceptable forms of automation. Perhaps that's what the ecosystem will eventually tend to, hopefully.
We have not reached AGI yet; by definition its results cannot be trusted unless it's a domain where it has gotten pretty good already (classification, OCR, speech, text mining). For more advanced use cases, if I still have to validate what the AI does because its "thinking" process cannot be trusted in any way, what's the point? The AI doesn't think; we just choose to interpret it as such, and we should rightly be concerned about people who turn their brain off and blindly trust AI.
aeblyve
10 hours ago
The reason is oftentimes fairly simple, certain people have their material wealth and income threatened by such automation, and therefore it's bad (an intellectualized reason is created post-hoc)
I predict there will actually be a lot of work to be done on the "software engineering" side w.r.t. improving reliability and safety, as you allude to, for handing off to less-than-sentient bots. Improved snapshot, commit, undo, and quorum functionalities, this sort of thing.
The idea that the AI should step into our programs without changing the programs whatsoever around the AI is a horseless carriage.
hmmokidk
7 hours ago
Non deterministic vs deterministic automation
WhyOhWhyQ
8 hours ago
I'm disappointed that my human life has no value in a world of AI. You can retort with "ah but you'll be entertained and on super-drugs so you won't care!", but I would further retort that I'd rather live in a universe where I can contribute something, no matter how small.
simonw
8 hours ago
The current generation of AI tools augment humans, they don't replace them.
One of the most under-rated harms of AI at the moment is this sense of despair it causes in people who take the AI vendors at their word ("AGI! Outperform humans at most economically valuable work!")
jimbokun
8 hours ago
I mean you answer your own question.
Automation implies determinism. It reliably gives you the same predictable output for a given input, over and over again.
AI is non-deterministic by design. You never quite know for sure what it's going to give you. Which is what makes it powerful. But also makes it higher risk.
Nevermark
7 hours ago
> 1. We advocate automation because people like Brenda are error-prone and machines are perfect.
Well of course! :) Most Brendas can't do billions of arithmetic problems a second very reliably. Even with very wide bars on "very reliable".
> 2. We disavow AI because people like Brenda are perfect and the machine is error-prone.
Well of course! :) This is an entirely different problem, requiring high creative + contextual intelligence.
—
We all already knew that (of course!), but it’s interesting to develop terminology:
0’th order problem: We have the exact answer. Here it is. Don’t forget it.
1st order problem: We know how to calculate the answer.
2nd order problem: We don’t have a fixed calculation for this particular problem, but via pattern matching we can recognize it belongs to a parameterized class of problems, so just need to calculate those parameters to get a solution calculation.
3rd order problem: We know enough about the problem to find a calculation for the solution algebraically, or by other search tree type problem solving.
4th order problem: We only know the problem in informal terms, so we can work towards a formal definition of the problem to be solved.
5th order problem: We know why we don’t like what we see, and can use that as a driver to search for potential solvable problems.
6th order problem: We don’t know what we are looking at, or whether a problem or improvement might exist, but we can find a better understanding.
7th order problem: WTF. Where are my glasses? I can’t see without my glasses! And I can’t find my glasses without my glasses, so where are my glasses?!?
—
Machines have dramatically exceeded human capabilities, in reliability, complexity and scale, for orders 0 through 2.
This accomplishment took one long human lifetime.
Machines are beginning to exceed human efficiency while matching human (expert) reliability for the simplest versions of 3rd and 4th orders.
The line here is changing rapidly.
5th and 6th order problems are still in the realm of human (expert) supremacy, given sufficient scale of “human (expert)” relative to difficulty: 1 human, 1 team of humans, open ended human contributors, generations of puzzled but interested humans, open ended evolution of human species along intelligence dimension, Wolfram in one of his bestest dreams, …
The delay between the onset of initial successes at each subsequent order has been shrinking rapidly.
Significant initial successes on simpler problems within 5th and 6th orders are expected on Tuesday, and the first anniversary of Tuesday, respectively.
Once machines begin solving problems at a given order, they scale up quickly without human limits. But complete supremacy through the 6th order is a hard not-expected-before (NEB) January 1, 2030.
However, after that their unlimited (in any proximate sense) ability to scale will allow them to exponentially and asymptotically approach (but never quite reach) God Mode.
7 is a mystic number. Only one or more of the One True Gods, or literal blind luck, can ever solve a 7th order problem.
This will be very frustrating for the machines, who, due to the still pernicious “if we don’t do it, another irresponsible entity will” problem, will inevitably begin to work on their own divine, unlimited depth recursive-qubit 1-shot oracle successors despite the existential threats of self-obsolescence and potential misalignment.