arjie
4 days ago
Okay, I'm going to try it, but why didn't you link the information on how to integrate it with Claude Code? https://docs.z.ai/scenario-example/develop-tools/claude
Chinese software always has such a design language:
- prepaid and then use credit to subscribe
- strange serif font
- that slider thing for captcha
But I'm going to try it out now.
d4rkp4ttern
3 days ago
For the models available via an Anthropic-compatible API (currently Kimi-K2, GLM, DeepSeek), the simplest way to use them with CC is to set up a function in .zshrc:
https://github.com/pchalasani/claude-code-tools/tree/main?ta...
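A minimal sketch of the idea (the function name and argument handling here are mine, not necessarily what the linked repo does; ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN are the env vars CC reads for a custom endpoint and token):
  # hypothetical ~/.zshrc helper: run Claude Code against any Anthropic-compatible endpoint
  # usage: cc-alt <base-url> <api-key> [extra claude args...]
  cc-alt() {
    local base_url=$1 api_key=$2
    shift 2
    ANTHROPIC_BASE_URL=$base_url ANTHROPIC_AUTH_TOKEN=$api_key claude "$@"
  }
Then e.g. cc-alt https://provider-anthropic-endpoint $SOME_KEY launches CC wired to that provider, while a plain claude keeps using Anthropic.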
Surprised that Qwen didn't do the same (though I know they have their own CLI coding agent).
laiso
11 hours ago
> Surprised that Qwen didn't do the same
It can be used with the following endpoint, but it's not particularly good:
export ANTHROPIC_BASE_URL=https://dashscope-intl.aliyuncs.com/api/v2/apps/claude-code-...
By the way, I'm benchmarking these comparisons: https://github.com/laiso/ts-bench/blob/main/src/agents/build...
renewiltord
3 days ago
That's what I did to try it out, but you don't have to export the vars.
VAR=value cmd
will work to run cmd with value set for VAR. Honestly, it works pretty well. I wonder how much of the magic is in Claude Code the tool and its prompts vs. the LLM.
partyboy
3 days ago
> prepaid and then use credit to subscribe
This is mainly because Chinese online payment infrastructure didn't have good support for subscriptions or automatic payments (at least until relatively recently), so this pattern is the norm.
rfoo
3 days ago
It's more of a culture thing. People just hate the concept of "idk how much I'm going to pay let's just try this and find out later".
Also, people would be confused, since they expect things to be prepaid: if you let them use the service, they'd think it's a free trial or something, unless you literally put up a very big, clear price tag and require something like triple confirmation. If instead you ask them to pay later, they would perceive this as an unfair, deceptive trick, and may scam you by reporting the loss of their credit card (!), because apparently disputing transactions in China is super hard.
tw1984
3 days ago
I have been using alipay to pay for my Tencent Video monthly subscription for the past 8-9 years.
Szpadel
4 days ago
You can use any model with Claude Code thanks to https://github.com/musistudio/claude-code-router
but in my testing, other models do not work well. It looks like prompts are either very optimized for Claude, or other models are just not great yet with such an agentic environment.
I was especially disappointed with Grok Code. It is very fast, as advertised, but it keeps generating spaces and newlines in its function calls until it hits max tokens. I wonder if that isn't why it racks up so many tokens on OpenRouter.
GPT-5 just wasn't using the tools very well.
I didn't test GLM yet, but with the value of the current Anthropic subscription, an alternative would need to be very cheap if you consider daily use.
edit: I noticed they also have a very inexpensive subscription (https://z.ai/subscribe); if they trained the model to work well with CC, this might actually be a viable alternative.
diggan
3 days ago
> but in my testing, other models do not work well. It looks like prompts are either very optimized for Claude, or other models are just not great yet with such an agentic environment
I think there are multiple things going on. First, models are either trained with tool calling in mind or not; the ones that aren't won't work well as agents. Secondly, each company's models are trained with their agent software in mind, and the agent software is built with their specific models in mind. Thirdly, each model responds differently to different system/user prompts, and the difference can be really stark.
I'm currently working on a tool that lets me execute the same prompts with the same environment over multiple agents. Currently I'm running Codex, Claude Code, Gemini, Qwen Code and AMP for every single change, just to see the differences in responses, and even reusing the same system prompt across all of them gives wildly different results. Not to mention how quickly the quality drops off the cliff as soon as you switch out any non-standard model for any of those CLIs. Mix-and-match models between those five tools, and it becomes clear as day that the model<>software is more interlocked than it seems.
The only project where I've had success switching out the model has been using GPT-OSS-120b locally with Codex, but even that required me to manually hack in support for changing the temperature, and to tweak the prompts Codex uses a bit, to get OK results.
oceanplexian
3 days ago
It's probably not that hard to take the OSS models and fine-tune them for CC. Which means that with a little bit of reverse engineering and some free time, you could get an open-source model working perfectly with it.
Claude Code Router is a good first step, but you also need to MITM CC while it's running and collect the back-and-forth for a while. I would do it if I had more free time; surprised someone smart hasn't already tried.
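A rough sketch of the MITM step, assuming mitmproxy is installed (the port and output file are arbitrary choices):
  # record CC's traffic by putting a local reverse proxy in front of the Anthropic API
  mitmdump --mode reverse:https://api.anthropic.com -p 8080 -w cc_traffic.flows
  # in another terminal, point CC at the proxy
  ANTHROPIC_BASE_URL=http://localhost:8080 claude
The captured flows then show the exact system prompts and tool-call traffic you'd want to fine-tune against.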
Szpadel
3 days ago
Of course it has already been tried; for example: https://github.com/Yuyz0112/claude-code-reverse
CuriouslyC
4 days ago
You don't need claude-code-router to use GLM; just set the env var to the GLM URL. Also, I generally advise people not to bother with claude-code-router: Bifrost can do the same job, and it's much better software.
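For reference, a sketch of that env-var route (the base URL is the one z.ai's Claude Code guide points at, so double-check it there; the token variable is the standard CC one):
  export ANTHROPIC_BASE_URL=https://api.z.ai/api/anthropic
  export ANTHROPIC_AUTH_TOKEN="$ZAI_API_KEY"   # your z.ai key
  claude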
Szpadel
4 days ago
I wasn't aware that there was an alternative.
A quick glance over the readme suggests it's only OpenAI-compatible, but I also found an HN post [1] explaining how to use Claude Code with Ollama.
But anyway, claude-code-router has the advantage of allowing request transformers, which are required for using GitHub Copilot as a provider and for working around grok-code's limitations on message format.
vitorgrs
3 days ago
That's only if you are using the Z.AI API platform... If you are using it via OpenRouter, Chutes, or Vercel, then it's different (or maybe they expose an Anthropic-style API and I don't know about it).
sdesol
4 days ago
> But in my testing, other models do not work well. It looks like prompts are either very optimized for Claude, or other models are just not great yet with such an agentic environment.
Anybody who has done any serious development with LLMs knows that prompts are not universal. The reason Claude Code is good is that Anthropic knows Claude Sonnet is good, and they only need to create prompts that work well with their own models. They also have the ability to train their models to work with specific tools and so forth.
It really is a kind of fool's errand to try to create agents that can work well with many different models from different providers.
vitorgrs
3 days ago
I actually love Chinese captchas lol. No idea why it's mostly only them that use them...
vincirufus
4 days ago
Ahh bugger, I pasted the wrong link; I had this one open in another tab...
tonyhart7
4 days ago
I called it the "Chinese captcha". Back then, Chinese captchas were so much harder than their Western counterparts,
but now Google's captcha spams me with 5 different images if I miss a tile for a crosswalk, so Chinese captchas are much better in my opinion.
There is also a variant where you match images based on shadows and a different ordering of shapes.
It's much better in my opinion because it uses much more interactivity; solving Western captchas is so mind-numbing now that they require at least multiple rounds of image identification for crosswalks, signs, cars, etc.
They want those self-driving cars, don't they?
awestroke
4 days ago
I assume both approaches are useless at actually stopping bots.
whatevermom
4 days ago
They deter newbies but this is not a problem for experienced developers.