Running models locally with LM Studio, you can use a shell function like:

```shell
claude-local () {
  # Grab the id of the first loaded model from LM Studio's API (-r strips the JSON quotes)
  MODEL=$(curl --silent localhost:1234/api/v1/models | jq -r 'first(.models[].loaded_instances[].id)')
  ANTHROPIC_BASE_URL=http://localhost:1234 ANTHROPIC_AUTH_TOKEN='' claude --model "$MODEL"
}
```
Fun experiment: run `claude` and `claude-local` side by side and paste the same prompt into both. In my experience, recent open-weight models (Qwen, Gemma) are pretty solid on quality, even on moderately difficult prompts. They get the "right" answer eventually, but roughly 10x slower on my M3 Mac.
It can be done, both with local models and with cheaper cloud models, but IME it does not work very well: CC really needs the Anthropic model special sauce. A point of surprise here is that we also run CC against Bedrock-hosted Anthropic models, and that works reasonably well, even though there is a bunch of other server-side functionality CC uses when running on an Anthropic subscription.
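For reference, the Bedrock route is configured entirely through environment variables. A minimal sketch, assuming Anthropic's documented `CLAUDE_CODE_USE_BEDROCK` switch; the region value is a placeholder for wherever your models are hosted:

```shell
# Route Claude Code through AWS Bedrock instead of the Anthropic API.
export CLAUDE_CODE_USE_BEDROCK=1
export AWS_REGION=us-east-1   # placeholder: the region hosting your Bedrock models
claude                        # uses your existing AWS credentials (aws configure / SSO)
```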
I often use the Codex plugin to validate Claude's results, with surprisingly many findings, so it becomes very helpful for things like code review. I'm considering trying Deepseek 4 Pro, which seems like it could be a good/cheap alternative to Codex.
I stumbled on this page, and it seems to imply that Claude Code calls other models like Qwen, not just Anthropic's.
Is that commonly done? Presumably this comes from people customizing their installs; is that correct?
You can do it with env vars for local models, but the search tool does not work.
Anthropic has partnerships; for example, Claude Desktop now has a provider selection that allows you to use Vertex AI instead, even for Claude models (same price, more reliable).
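Claude Code has an analogous env-var switch for Vertex AI. A sketch, assuming the documented `CLAUDE_CODE_USE_VERTEX` variables; the region and project ID below are placeholders:

```shell
# Route Claude Code through Google Vertex AI instead of the Anthropic API.
export CLAUDE_CODE_USE_VERTEX=1
export CLOUD_ML_REGION=us-east5                    # placeholder region
export ANTHROPIC_VERTEX_PROJECT_ID=my-gcp-project  # placeholder GCP project id
claude   # authenticates via your gcloud application-default credentials
```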