fny
6 days ago
I really don't understand how people can give an AI access to a pile of tools and data sources and unleash it on customers. It's horrible UX in my experience and at times worse than a phone tree.
My view is that you need to transition slowly and carefully to AI-first customer support.
1. Know the scope of problems an AI can solve with high probability. Related prompt: "You can ONLY help with the following issues."
2. Escalate to a human immediately if it's out of scope: "If you cannot help, escalate to a human immediately by CCing bob@smallbiz.co" (rough sketch after this list).
3. Have an "unlocked agent" that your customer service person can use to answer questions, and evaluate how well it actually helps. Use this to drive your development roadmap.
4. If the "unlocked agent" becomes good at solving a problem, add that to the in-scope solutions.
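To make points 1 and 2 concrete, here's a minimal sketch of the scoped agent plus escalation. The model name, issue list, ESCALATE sentinel, and notify_human hook are placeholders, not production code, and the single-word sentinel is just one simple way to detect the handoff:

    # Rough sketch of points 1 and 2: a scoped support agent that hands off
    # when a request falls outside its allow-list. All names are illustrative.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    IN_SCOPE_ISSUES = [
        "order status",
        "refunds under $50",
        "password resets",
    ]

    SYSTEM_PROMPT = (
        "You are the customer support agent for SmallBiz. "
        "You can ONLY help with the following issues: "
        + "; ".join(IN_SCOPE_ISSUES)
        + ". If you cannot help, reply with exactly the single word ESCALATE."
    )

    def notify_human(conversation: list[dict]) -> str:
        """Escalation hook: e-mail bob@smallbiz.co, or ping the owner's
        phone so they can take over the chat directly."""
        # send_transcript("bob@smallbiz.co", conversation)  # hypothetical helper
        return "One moment, I'm bringing a teammate into this chat."

    def handle_message(conversation: list[dict], user_message: str) -> str:
        """Answer one customer turn, or hand off if it's out of scope."""
        conversation.append({"role": "user", "content": user_message})
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "system", "content": SYSTEM_PROMPT}] + conversation,
        ).choices[0].message.content
        if reply.strip() == "ESCALATE":
            return notify_human(conversation)
        conversation.append({"role": "assistant", "content": reply})
        return reply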
Finally, you should probably have some way to test existing conversations when you make changes. (It's on my TODO list)
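For that replay check, something along these lines would probably do. The transcript format and helper names are invented, and exact string comparison is too strict for LLM output, so this just produces an old-vs-new side-by-side for a human (or a separate eval) to review:

    # Replay the user turns from saved transcripts through the current agent
    # and collect old vs. new replies for review. Names are illustrative.
    import json
    from pathlib import Path
    from typing import Callable

    def replay_transcript(path: Path,
                          agent: Callable[[list[dict], str], str]) -> list[dict]:
        """Re-run each customer turn of one saved conversation against the agent."""
        saved = json.loads(path.read_text())  # [{"role": ..., "content": ...}, ...]
        context: list[dict] = []
        report = []
        for i, turn in enumerate(saved):
            if turn["role"] == "user":
                old_reply = saved[i + 1]["content"] if i + 1 < len(saved) else "(end of transcript)"
                # pass a copy so the saved conversation stays the canonical context
                new_reply = agent(list(context), turn["content"])
                report.append({"user": turn["content"], "old": old_reply, "new": new_reply})
            context.append(turn)
        return report

    if __name__ == "__main__":
        # handle_message is the scoped agent from the sketch above
        for path in sorted(Path("transcripts").glob("*.json")):
            for row in replay_transcript(path, handle_message):
                print(path.name, "|", row["user"],
                      "\n  old:", row["old"], "\n  new:", row["new"])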
I've implemented this for a few small businesses, and the process is so seamless that no one has suspected they were talking to an AI. For one client, there's not even a visible escalation step: they get pinged on their phone and take over the chat!
myhf
6 days ago
The purpose of customer support is to convince the customer that it is not worth their time to pursue support. A worse experience achieves that goal faster.
Using GenAI is a huge breakthrough in this field, because it is a socially acceptable way to tell someone you don't care about their issue.
politelemon
6 days ago
You've articulated it better than I could. Reading through this author's post, I think they've misunderstood the objectives.
The purpose has been achieved, in that there is a large drop-off rate: most customers give up rather than keep pursuing their issue. The product manager has met their goals, cut costs, and might be looking forward to their bonus.
It would be far more expensive to make the LLM behave effectively than it would be to do nothing. Any product manager that sincerely cared about customer support wouldn't be inflicting a personalised callous disregard for service. Instead they'd be focusing on improving documentation, help, and processes. But that's not innately quantifiable in a way that leads to bonuses, and therefore goes unnoticed.
risyachka
6 days ago
>> really don't understand how people can give an AI access to a pile of tools and data sources and unleash it on customers
It’s pretty simple. When a non-tech person sees faked demos of what AI can do, it looks epic; everyone extrapolates from those results and thinks AI really is that good.
small_scombrus
6 days ago
Doubly so if the person deciding what gets implemented doesn't really get what their staff actually do.
An LLM's ability to give convincing-sounding answers is like catnip for service desk managers who have never actually been on the desk themselves.
android521
5 days ago
A lot of the agent tools/frameworks don't dare to put an agent on their own site to answer user questions. For those that dare, it sucks. E.g. Mastra.ai is supposed to be a framework for building agents, but the agent on their website cannot answer any questions (I asked ~20 questions and got 0 satisfactory answers).