SilverElfin
a month ago
Good idea. I think companies are implementing all this complex stuff on their own today. But many probably also just train staff tightly on what kinds of refunds or discounts they can give, and manage it by sampling some amount of chat logs. It's low tech but probably works well enough to reduce the cost of mistakes.
bhaviav100
a month ago
That’s true today, and it works as long as humans are the primary actors.
The break happens when AI drafts at scale. Training + sampling are after-the-fact controls. By the time a bad commitment is found, the customer expectation already exists.
This just moves enforcement from social norms (training, sampling) to a hard system boundary around irreversible actions.
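To make that concrete, here's a rough Python sketch. Every name here, the $50 threshold, and the stubbed backends are made up for illustration; the point is only where the check lives. The gate sits in the execution path, so nothing the drafting layer writes can commit past it:

    # Minimal sketch of a hard enforcement point for refunds.
    # All names, the limit, and the backends are hypothetical.

    AUTO_APPROVE_LIMIT = 50.00  # illustrative policy ceiling per refund

    def issue_refund(order_id: str, amount: float) -> str:
        # Stub standing in for a real payment backend (assumption).
        return f"refunded {amount:.2f} on {order_id}"

    def queue_for_human_review(order_id: str, amount: float) -> str:
        # Stub standing in for a human review queue (assumption).
        return f"queued {order_id} ({amount:.2f}) for human approval"

    def execute_refund(order_id: str, amount: float, drafted_by_ai: bool) -> str:
        # The gate runs where the refund executes, not where the text
        # is drafted, so no model output can talk its way past it.
        if amount <= 0:
            raise ValueError("refund amount must be positive")
        if drafted_by_ai and amount > AUTO_APPROVE_LIMIT:
            # Irreversible action above the limit: hold it for review
            # *before* the money moves, not after sampling chat logs.
            return queue_for_human_review(order_id, amount)
        return issue_refund(order_id, amount)

The whole point is that the check runs before the irreversible step, whereas training and log sampling run after it.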
Curious if you’ve seen teams hit that inflection point yet.
chrisjj
a month ago
If the "AI" is remotely as intelligent as the human, the same management solution applies.
If it isn't, then you have no machine smart enough to provide a solution requiring /more/ intelligence.
bhaviav100
a month ago
This isn’t about relative intelligence. Humans can be held accountable after the fact. Systems can’t. Once execution is automated, controls have to move from training and review to explicit enforcement points. Intelligence doesn’t change that requirement.
chrisjj
a month ago
Sufficiently intelligent machines, like sufficiently intelligent humans, can and should be trained to behave as required and can and should be held accountable when they don't.