postalcoder
14 days ago
The best part about this blog post is that none of it is a surprise – Codex CLI is open source. It's nice to be able to go through the internals without having to reverse engineer it.
Their communication is exceptional, too. Eric Traut (of Pyright fame) is all over the issues and PRs.
vinhnx
14 days ago
This came as a big surprise to me last year. I remember when they announced that Codex CLI was open source, and then the codex-rs [0] rewrite from TypeScript to Rust, with the entire CLI now open source. This is a big deal and very useful for anyone wanting to learn how coding agents work, especially coming from a major lab like OpenAI. I also contributed some improvements to their CLI a while ago and have been following their releases and PRs to broaden my knowledge.
phrotoma
14 days ago
I know very little about typescript and even less about rust. Am I getting the rust version of codex when I do `npm i -g @openai/codex`?
A stand alone rust binary would be nicer than installing node.
vorticalbox
14 days ago
yes [0]
> The Rust implementation is now the maintained Codex CLI and serves as the default experience
[0] https://github.com/openai/codex/tree/main/codex-rs#whats-new...
alabhyajindal
14 days ago
They should switch to a native installer then. Quite confusing
quinncom
14 days ago
brew install codex
https://developers.openai.com/codex/quickstart/?setup=cli
phrotoma
11 days ago
Yeah I'm out here installing a billion node things to have codex hack on my python app. Def gonna look into a standalone rust binary.
Leynos
14 days ago
They're leveraging the (relative) ubiquity of npm amongst developers.
redox99
14 days ago
For some reason a lot of people are unaware that Claude Code is proprietary.
atonse
14 days ago
Probably because it doesn’t matter most of the time?
fragmede
14 days ago
If the software is, say, Audacity, whose target market isn't specifically software developers, sure. But seeing as how Claude Code's target market has a lot of people who can read code and write software (some of them for a living!), it becomes material. Especially when CC has numerous bugs that have gone unaddressed for months that people in their target market could fix. I mean, I have my own beliefs as to why they haven't opened it, but at the same time, it's frustrating hitting the same bugs day after day.
rmunn
14 days ago
> ... numerous bugs that have gone unaddressed for months that people in their target market could fix.
THIS. I get so annoyed when there's a longstanding bug that I know how to fix, the fix would be easy for me, but I'm not given the access I need in order to fix it.
For example, I use Docker Desktop on Linux rather than native Docker, because other team members (on Windows) use it, and there were some quirks in how it handled file permissions that differed from Linux-native Docker; after one too many times trying to sort out the issues, my team lead said, "Just use Docker Desktop so you have the same setup as everyone else, I don't want to spend more time on permissions issues that only affect one dev on the team". So I switched.
But there's a bug in Docker Desktop that was bugging me for the longest time. If you quit Docker Desktop, all your terminals would go away. I eventually figured out that this only happened to gnome-terminal, because Docker Desktop was trying to kill the instance of gnome-terminal that it kicked off for its internal terminal functionality, and getting the logic wrong. Once I switched to Ghostty, I stopped having the issue. But the bug has persisted for over three years (https://github.com/docker/desktop-linux/issues/109 was reported on Dec 27, 2022) without ever being resolved, because 1) it's just not a huge priority for the Docker Desktop team (who aren't experiencing it), and 2) the people for whom it IS a huge priority (because it's bothering them a lot) aren't allowed to fix it.
Though what's worse is a project that is open-source, has open PRs fixing a bug, and lets those PRs go unaddressed, eventually posting a notice in their repo that they're no longer accepting PRs because their team is focusing on other things right now. (Cough, cough, githubactions...)
pxc
14 days ago
> I get so annoyed when there's a longstanding bug that I know how to fix, the fix would be easy for me, but I'm not given the access I need in order to fix it.
This exact frustration (in his case, with a printer driver) is responsible for provoking RMS to kick off the free software movement.
fragmede
13 days ago
GitHub Actions is a bit of a special case, because it mostly runs on their systems, but that's when you just fork and, I mean, the problems with their (original) branch are their problem.
arthurcolle
14 days ago
They are turning it into a distributed system that you'll have to pay to access. Anyone can see this. CLI is easy to make and easy to support, but you have to invest in the underlying infrastructure to really have this pay off.
Especially if they want to get into enterprise VPCs and "build and manage organizational intelligence"
storystarling
14 days ago
The CLI is just the tip of the iceberg. I've been building a similar loop using LangGraph and Celery, and the complexity explodes once you need to manage state across async workers reliably. You basically end up architecting a distributed state machine on top of Redis and Postgres just to handle retries and long-running context properly.
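The "distributed state machine" pattern described above can be sketched roughly as follows. This is a minimal, hypothetical illustration, not LangGraph or Celery code: a plain dict stands in for Redis/Postgres, and each task carries explicit state so any worker can checkpoint and resume it after a transient failure.

```python
import json

STORE = {}  # task_id -> serialized task state; stands in for Redis/Postgres

def save(task_id, state):
    STORE[task_id] = json.dumps(state)

def load(task_id):
    return json.loads(STORE[task_id])

def run_step(state):
    # Placeholder for one LLM/tool call; fails until the third attempt
    # to exercise the retry path.
    if state["attempts"] < 2:
        raise RuntimeError("transient failure")
    return "done"

def worker(task_id, max_retries=5):
    state = load(task_id)
    while state["status"] == "pending":
        try:
            state["result"] = run_step(state)
            state["status"] = "done"
        except RuntimeError:
            state["attempts"] += 1
            if state["attempts"] >= max_retries:
                state["status"] = "failed"
        save(task_id, state)  # checkpoint after every transition
    return state

save("t1", {"status": "pending", "attempts": 0, "result": None})
final = worker("t1")
print(final["status"], final["attempts"])  # -> done 2
```

The key design point is that the worker never holds state only in memory: every transition is persisted, which is what makes retries and long-running context survivable across async workers.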
lomase
14 days ago
[dead]
mi_lk
14 days ago
Same. If you're already using a proprietary model might as well just double down
swores
14 days ago
But you don't have to be restricted to one model either? Codex being open source means you can choose to use Claude models, or Gemini, or...
It's fair enough to decide you want to just stick with a single provider for both the tool and the models, but surely still better to have an easy change possible even if not expecting to use it.
mi_lk
14 days ago
Codex CLI with Opus, or Gemini CLI with 5.2-codex, because they're open sourced agents? Go ahead if you want but show me where it actually happens with practical values
behnamoh
14 days ago
until Microsoft buys it and enshits it.
consumer451
14 days ago
This is a fun thought experiment. I believe that we are now at the $5 Uber (2014) phase of LLMs. Where will it go from here?
How much will a synthetic mid-level dev (Opus 4.5) cost in 2028, after the VC subsidies are gone? I would imagine as much as possible? Dynamic pricing?
Will the SOTA model labs even sell API keys to anyone other than partners/whales? Why even that? They are the personalized app devs and hosts!
Man, this is the golden age of building. Not everyone can do it yet, and every project you can imagine is greatly subsidized. How long will that last?
tern
14 days ago
While I remember $5 Ubers fondly, I think this situation is significantly more complex:
- Models will get cheaper, maybe way cheaper
- Model harnesses will get more complex, maybe way more complex
- Local models may become competitive
- Capital-backed access to more tokens may become absurdly advantaged, or not
The only thing I think you can count on is that more money buys more tokens, so the more money you have, the more power you will have ... as always.
But whether some version of the current subsidy, which levels the playing field, will persist seems really hard to model.
All I can say is, the bad scenarios I can imagine are pretty bad indeed—much worse than that it's now cheaper for me to own a car, while it wasn't 10 years ago.
depr
14 days ago
If the electric grid cannot keep up with the additional demand, inference may not get cheaper. The cost of electricity would go up for LLM providers, and VCs would have to subsidize them more until the price of electricity goes down, which may take longer than they can wait, if they have been expecting LLMs to replace many more workers within the next few years.
andai
14 days ago
The real question is how long it'll take for Z.ai to clone it at 80% quality and offer it at cost. The answer appears to be "like 3 months".
consumer451
14 days ago
This is a super interesting dynamic! The CCP is really good at subsidizing and flooding global markets, but in the end, it takes power to generate tokens.
In my Uber comparison, it was physical hardware on location... taxis, but this is not the case with token delivery.
This is such a complex situation in that regard. However, once the market settles and monopolies are created, eventually the price will be what the market can bear. Will that actually create an increase in gross planet product, or will the SOTA token providers just eat up the existing gross planet product, with no increase?
I suppose whoever has the cheapest electricity will win this race to the bottom? But... will that ever increase global product?
___
Upon reflection, the comment above was likely influenced by this truly amazing quote from Satya Nadella's interview on the Dwarkesh podcast. This might be one of the most enlightened things that I have ever heard in regard to modern times:
> Us self-claiming some AGI milestone, that's just nonsensical benchmark hacking to me. The real benchmark is: the world growing at 10%.
https://www.dwarkesh.com/p/satya-nadella#:~:text=Us%20self%2...
YetAnotherNick
14 days ago
With optimizations and new hardware, power is almost a negligible cost; contrary to popular belief, $5/month would be sufficient for all users. You can get 5.5M tokens/s/MW [1] for Kimi K2 (= 20M tokens/kWh = 181M tokens/$), which is 400x cheaper than current pricing even if you exclude architecture/model improvements. The thing is that Nvidia is currently swallowing up massive revenue, which China could possibly solve by investing in R&D.
[1]: https://developer-blogs.nvidia.com/wp-content/uploads/2026/0...
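The arithmetic behind those figures can be checked directly. The electricity price below is my assumption (roughly an industrial rate); the throughput figure is the one quoted in the comment.

```python
# Sanity-check of the tokens-per-dollar claim above.
tokens_per_s_per_mw = 5.5e6                        # quoted NVIDIA figure for Kimi K2
tokens_per_kwh = tokens_per_s_per_mw / 1000 * 3600  # per kW, over one hour
print(f"{tokens_per_kwh:.3g} tokens/kWh")           # ~1.98e7, i.e. the ~20M/kWh claim

price_per_kwh = 0.11                                # USD; assumed industrial rate
tokens_per_dollar = tokens_per_kwh / price_per_kwh
print(f"{tokens_per_dollar:.3g} tokens/$")          # ~1.8e8, i.e. ~181M tokens/$
```

So the "20M/kWh" and "181M tokens/$" numbers are internally consistent at around $0.11/kWh electricity.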
FuckButtons
14 days ago
I can run Minimax-m2.1 on my m4 MacBook Pro at ~26 tokens/second. It’s not opus, but it can definitely do useful work when kept on a tight leash. If models improve at anything like the rate we have seen over the last 2 years I would imagine something as good as opus 4.5 will run on similarly specced new hardware by then.
consumer451
14 days ago
I appreciate this. However, as a ChatGPT, Claude.ai, Claude Code, and Windsurf user, who has tried nearly every single variation of Claude, GPT, and Gemini in those harnesses, and has tested all of those models via API for LLM integrations into my own apps... I just want SOTA, 99% of the time, for myself and my users.
I have never seen a use case where a "lower" model was useful, for me, and especially my users.
I am about to get almost the exact MacBook that you have, but I still don't want to inflict non-SOTA models on my code, or my users.
This is not a judgement against you, or the downloadable weights, I just don't know when it would be appropriate to use those models.
BTW, I very much wish that I could run Opus 4.5 locally. The best that I can do for my users is the Azure agreement that they will not train on their data. I also have that setting set on my claude.ai sub, but I trust them far less.
Disclaimer: No model is even close to Opus 4.5 for agentic tasks. In my own apps, I process a lot of text/complex context, and I use Azure GPT-4.1 for limited LLM tasks... but for my "chat with the data" UX, Opus 4.5 all day long. It has tested far superior.
barrenko
14 days ago
Is Azure's pricing competitive on openAI's offerings through the api? Thanks!
consumer451
14 days ago
The last I checked, it is exactly equivalent per token to direct OpenAI model inference.
The one thing I wish for is that Azure Opus 4.5 had json structured output. Last I checked that was in "beta" and only allowed via direct Anthropic API. However, after many thousands of Opus 4.5 Azure API calls with the correct system and user prompts, not even one API call has returned invalid json.
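When structured output isn't available, the usual fallback is exactly what's described above: prompt for JSON only, then validate client-side and retry on failure. A minimal sketch of that loop, where `call_model` is a stub standing in for the actual Azure API call (it returns malformed JSON on the first attempt to exercise the retry path):

```python
import json

def call_model(prompt, attempt):
    # Hypothetical stub for the real API call.
    if attempt == 0:
        return '{"summary": "ok",'  # truncated, invalid JSON
    return '{"summary": "ok", "score": 3}'

def get_json(prompt, required_keys=("summary", "score"), max_attempts=3):
    for attempt in range(max_attempts):
        raw = call_model(prompt, attempt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # real code would feed the parse error back into the prompt
        if all(k in data for k in required_keys):
            return data
    raise ValueError("no valid JSON after retries")

result = get_json("Respond ONLY with a JSON object: ...")
print(result)  # -> {'summary': 'ok', 'score': 3}
```

With a well-constrained system prompt the retry path rarely fires, which matches the experience above of thousands of calls without an invalid response; the validation is cheap insurance for the rare failure.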
EnPissant
14 days ago
I'm guessing that's ~26 decode tokens/s for 2-bit or 3-bit quantized Minimax-m2.1 at 0 context, and it only gets worse as the context grows.
I'm also sure your prefill is slow enough to make the model mostly unusable even at smallish context windows, and entirely so at mid to large contexts.
stavros
14 days ago
Can't really fault them when this exists:
bad_haircut72
14 days ago
What even is this repo? It's very deceptive.
adastra22
14 days ago
Issue tracker for submitting bug reports that no one ever reads or responds to.
stavros
14 days ago
Now that's not fair, I'm sure they have Claude go through and ignore the reports.
adastra22
14 days ago
Unironically yes. If you file a bug report, expect a Claude bot to mark it as duplicate of other issues already reported and close. Upon investigation you will find either
(1) a circular chain of duplicate reports, all closed; or
(2) a game of telephone where each issue is subtly different from the next, eventually reaching an issue that has nothing at all to do with yours.
At no point along the way will you encounter an actual human from Anthropic.
kylequest
14 days ago
By the way, I reverse engineered the Claude Code binary and started sharing different code snippets (on Twitter/Bluesky/Mastodon/Threads). There's a lot of code there, so I'm looking for requests on which parts of the code to share and analyze. One of the requests I got was about the LSP functionality in CC. Anything else you would find interesting to explore there?
I'll post the whole thing in a Github repo too at some point, but it's taking a while to prettify the code, so it looks more natural :-)
lifthrasiir
14 days ago
Not only would this violate the ToS, but the newer native version of Claude Code also precompiles most JS source files into JavaScriptCore's internal bytecode format, so reverse engineering will soon become much more annoying, if not harder.
arianvanp
14 days ago
Claude Code is very good at reverse engineering. I reverse engineer Apple products on my MacBook all the time to debug issues.
kylequest
14 days ago
Also some WASM there too... though WASM is mostly limited to Tree Sitter for language parsing. Not touching those in phase 1 :-)
embedding-shape
14 days ago
> Not only this would violate the ToS
What specific parts of the ToS does "sharing different code snippets" violate? Not that I don't believe you, just curious about the specifics as it seems like you've already dug through it.
pxc
14 days ago
Using GitHub as an issue tracker for proprietary software should be prohibited. Not that it would, these days.
Codeberg at least has some integrity around such things.
majkinetor
14 days ago
That must be the worst repo I have ever seen.
huevosabio
14 days ago
I frankly don't understand why they keep CC proprietary. Feels to me that the key part is the model, not the harness, and they should make the harness public so the public can contribute.
causalmodels
14 days ago
Yeah this has always seemed very silly. It is trivial to use claude code to reverse engineer itself.
mi_lk
14 days ago
looks like it's trivial to you because I don't know how to
n2d4
14 days ago
If you're curious to play around with it, you can use Clancy [1] which intercepts the network traffic of AI agents. Quite useful for figuring out what's actually being sent to Anthropic.
fragmede
14 days ago
If only there were some sort of artificial intelligence that could be asked about asking it to look at the minified source code of some application.
Sometimes prompt engineering is too ridiculous a term for me to believe there's anything to it, other times it does seem there is something to knowing how to ask the AI juuuust the right questions.
lsaferite
14 days ago
Something I try to explain to people I'm getting up to speed on talking to an LLM is that specific word choices matter. Mostly, it matters that you use the right jargon to orient the model. Sure, it's good at getting the semantics of what you said, but if you adjust and use the correct jargon, the model gets closer faster. I also explain that they can learn the right jargon from the LLM, and that sometimes it's better to start over once you've adjusted your vocabulary.
adastra22
14 days ago
That is against ToS and could get you banned.
Der_Einzige
14 days ago
GenAI was built on an original sin of mass copyright infringement that Aaron Swartz could only have dreamed of. Those who live in glass houses shouldn't throw stones, and Anthropic may very well get screwed HARD in a lawsuit from someone they banned.
Unironically, the ToS of most of these AI companies should be, and hopefully are, legally unenforceable.
adastra22
14 days ago
Are you volunteering? Look, people should be aware that bans are being handed out for this, lest they discover it the hard way.
If you want to make this your cause and incur the legal fees and lost productivity, be my guest.
fragmede
14 days ago
You're absolutely right! Hey Codex, Claude said you're not very good at reading obfuscated code. Can you tell me what this minified program does?
mlrtime
14 days ago
How would they know what you do on your own computer?
adastra22
14 days ago
Claude is run on their servers.
frumplestlatz
14 days ago
At this point I just assume Claude Code isn't OSS out of embarrassment for how poor the code actually is. I've got a $200/mo claude subscription I'm about to cancel out of frustration with just how consistently broken, slow, and annoying to use the claude CLI is.
andy12_
14 days ago
> how poor the code actually is.
Very probably. Apparently, it's literally implemented with a React-to-text pipeline, and it was so badly implemented that they were having problems with the garbage collector running too frequently.
stavros
14 days ago
OpenCode is amazing, though.
skerit
14 days ago
I switched to OpenCode a few weeks ago. What a pleasant experience. I can finally resume subagents (which has been broken in CC for weeks), copy the source of the assistant's output (even over SSH), have different main agents, have subagents call subagents... Beautiful.
fragmede
14 days ago
Especially that RCE!
qaz_plm
14 days ago
A new one or one previously patched?
Razengan
14 days ago
Anthropic/Claude's entire UX is the worst among the bunch
sakesun
11 days ago
Claude web is very slow compared to the others.
halfcat
14 days ago
What’s the best?
Razengan
14 days ago
In my experience, ChatGPT, and then Grok.
I've posted a lot of feedback about Claude over several months, and for example they still don't support Sign in with Apple on the website (but they support Sign in with Google, and Sign in with Apple on iOS!).
rashidae
14 days ago
Interesting. Have you tested other LLMs or CLIs as a comparison? Curious which one you’re finding more reliable than Opus 4.5 through Claude Code.
frumplestlatz
14 days ago
Codex is quite a bit better in terms of code quality and usability. My only frustration is that it's a lot less interactive than Claude. On the plus side, I can also trust it to go off and implement a deep complicated feature without a lot of input from me.
kordlessagain
14 days ago
Yeah same with Claude Code pretty much and most people don’t realize some people use Windows.
athrowaway3z
14 days ago
I'm almost certain their code is a dumpster fire.
As for your $200/mo sub: don't buy it. If you read the fine print, their 20x usage is _per 5h session_, not overall usage.
Take 2x $100 if you're hitting the limit.
boguscoder
14 days ago
I thought Eric Traut was famous for his pioneering work in virtualization, TIL he has Pyright fame too !
appplication
14 days ago
I appreciate the sentiment but I’m giving OpenAI 0 credit for anything open source, given their founding charter and how readily it was abandoned when it became clear the work could be financially exploited.
jstummbillig
14 days ago
> when it became clear the work could be financially exploited
That is not the obvious reason for the change. Training models got a lot more expensive than anyone thought it would.
You can of course always cast shade on people's true motivations and intentions, but there is a plain truth here that is simply silly to ignore.
Training "frontier" open LLMs seems to be exactly possible when a) you are Meta, have substantial revenue from other sources and simply are okay with burning your cash reserves to try to make something happen and b) you copy and distill from the existing models.
seizethecheese
14 days ago
I agree that openAI should be held with a certain degree of contempt, but refusing to acknowledge anything positive they do is an interesting perspective. Why insist on a one dimensional view? It’s like a fraudster giving to charity, they can be praiseworthy in some respect while being overall contemptible, no?
cap11235
14 days ago
Why even acknowledge them in any regard? Put trash where it belongs.
edmundsauto
14 days ago
By this measure, they shouldn’t even try to do good things in small pockets and probably should just optimize for profits!
Fortunately, many other people can deal with nuance.
psychoslave
14 days ago
Is it just a frontend CLI calling remote external logic for the bulk of operations, or does it come with everything needed to run locally offline? Does it provide weights under a FLOSS license? Does it document the whole build process, and how to redo it and go further on your own?