Show HN: An authority gate for AI-generated customer communication

2 points, posted a month ago
by bhaviav100

10 Comments

SilverElfin

a month ago

Good idea. I think companies are implementing all this complex stuff on their own today. But many probably also just have tight training of staff on what kinds of refunds or discounts they can give, and manage it by sampling some amount of chat logs. It’s low tech, but it probably works well enough to reduce the cost of mistakes.

bhaviav100

a month ago

That’s true today, and it works as long as humans are the primary actors.

The break happens when AI drafts at scale. Training + sampling are after-the-fact controls. By the time a bad commitment is found, the customer expectation already exists.

This just moves the control point from social enforcement to a hard system boundary for irreversible actions.
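
To make that concrete, here’s a rough sketch of where that boundary sits (Python; the keyword check and function names are toy placeholders, not how the real thing works):

    # Illustrative only: a pre-send gate that holds commitment-bearing drafts
    # for explicit approval instead of letting them reach the customer.
    import re

    COMMITMENT_PATTERNS = [
        r"\brefund\b",
        r"\bdiscount\b",
        r"\bwaive\b",
        r"\brenew(al)?\b",
    ]

    def looks_like_commitment(draft: str) -> bool:
        return any(re.search(p, draft, re.IGNORECASE) for p in COMMITMENT_PATTERNS)

    def dispatch(draft: str, send, hold_for_approval) -> None:
        # Drafting stays unrestricted; sending a commitment requires someone
        # with authority to approve it first.
        if looks_like_commitment(draft):
            hold_for_approval(draft)
        else:
            send(draft)

In practice the detection step would be something better than keyword matching; the point is only that the hold happens before send, not after a review of logs.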

Curious if you’ve seen teams hit that inflection point yet.

chrisjj

a month ago

If the "AI" is remotely as intelligent as the human, the same management solution applies.

If it isn't, then you have no machine smart enough to provide a solution requiring /more/ intelligence.

bhaviav100

a month ago

This isn’t about relative intelligence. Humans can be held accountable after the fact. Systems can’t. Once execution is automated, controls have to move from training and review to explicit enforcement points. Intelligence doesn’t change that requirement.

chrisjj

a month ago

Sufficiently intelligent machines, like sufficiently intelligent humans, can and should be trained to behave as required and can and should be held accountable when they don't.

chrisjj

a month ago

Why do you call this a failure?

This is "AI" parroting humans who made authorised commitments.

If you don't want commitments out, don't feed them in.

bhaviav100

a month ago

I don’t call it a failure of the AI. I agree it’s doing exactly what it was trained to do.

The failure is architectural: once AI is allowed to draft at scale, “don’t feed it commitments” stops being a reliable control. Those patterns exist everywhere in historical data and live context.

At that point the question isn’t training, it’s where you draw the enforcement boundary for irreversible outcomes.

That’s the layer I’m testing.

chrisjj

a month ago

I don't see that scale of drafting makes any difference. Reliability is entirely down to the training.

Also I think confining irreversible outcomes to the results of commitments is unsafe. Consider the irreversible outcome of advice that leads to a customer quitting. There isn't a separate "layer" here.

bhaviav100

a month ago

These are two different control problems.

Training governs what a model tends to say. Authority governs what is allowed to be acted on.

You can’t pre-block bad advice, but you can pre-block unapproved financial or contractual actions.
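
A minimal sketch of that split, with the names and the approval record invented purely for illustration:

    # Illustrative only: model text is not gated, but irreversible actions
    # check for an explicit, pre-existing grant of authority.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Approval:
        action: str          # e.g. "refund"
        max_amount: float    # the authority that was actually granted
        approved_by: str

    class UnauthorizedAction(Exception):
        pass

    def issue_refund(amount: float) -> None:
        print(f"refund of {amount:.2f} issued")  # stand-in for the real payment call

    def execute_refund(amount: float, approval: Optional[Approval]) -> None:
        # The model can draft whatever it likes; money only moves against
        # an approval that covers this specific action and amount.
        if (approval is None
                or approval.action != "refund"
                or amount > approval.max_amount):
            raise UnauthorizedAction("refund requires explicit approval")
        issue_refund(amount)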

That’s the scope.

chrisjj

a month ago

They are the same problem, since the actions here are acts of saying.

"AI-generated messages making commitments no one explicitly approved. Refunds implied. Discounts promised. Renewals renegotiated."

And if you can't block bad advice, you have a bigger problem, since you cannot block the resultant contractual action by the recipient, e.g. termination.