Show HN: Clean Clode – Clean Messy Terminal Pastes from Claude Code and Codex

4 points, posted 8 hours ago
by thewojo

2 Comments

westurner

4 hours ago

From https://github.com/google-gemini/gemini-cli/pull/5342#issuec... :

> Would .ipynb format solve for this? Unfortunately there's not yet a markdown format that includes output cells (likely due to the unusability of base64 encoded binary data). There are existing issues TODO to create a new format for Jupyter notebooks; which have notebook-level metadata, cell-level metadata, input cells, and output cells.
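The input/output pairing the quote describes is what the `.ipynb` container already provides. A minimal sketch of an nbformat-4 notebook that keeps a code cell together with its captured output (the cell contents here are invented for illustration):

```python
import json

# Hedged sketch: a minimal nbformat-4 notebook pairing an input cell with
# its captured output -- the structure a plain Markdown transcript lacks.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {"kernelspec": {"name": "python3", "display_name": "Python 3"}},
    "cells": [
        {
            "cell_type": "code",
            "metadata": {},
            "execution_count": 1,
            "source": ["print('hello')\n"],
            "outputs": [
                {
                    "output_type": "stream",
                    "name": "stdout",
                    "text": ["hello\n"],
                }
            ],
        }
    ],
}

serialized = json.dumps(notebook, indent=1)
print(serialized[:60])
```

Metadata can attach at both the notebook and cell level, which is where provenance records (model, prompt parameters) could live.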

API facades like OpenLLM and model routers like OpenRouter provide standard interfaces over the inputs and outputs of many models. Tools like Promptfoo, ChainForge, and LocalAI likewise abstract over many models.

What are the open standards for representing LLM inputs and outputs?

W3C PROV has prov:Entity, prov:Activity, and prov:Agent for modeling AI provenance: who or what did what when.
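For instance, a single LLM completion maps onto those three classes naturally. A sketch in the PROV-JSON serialization, with all identifiers invented for illustration: the prompt and response are entities, the completion call is an activity, and the model is a software agent.

```python
import json

# Hedged sketch of a W3C PROV-JSON document (identifiers are
# illustrative): a response entity generated by a completion activity
# that used a prompt entity and was associated with a model agent.
prov_doc = {
    "prefix": {"ex": "http://example.org/"},
    "entity": {
        "ex:prompt-1": {"ex:role": "llm-input"},
        "ex:response-1": {"ex:role": "llm-output"},
    },
    "activity": {
        "ex:completion-1": {"prov:startTime": "2024-01-01T00:00:00Z"},
    },
    "agent": {
        "ex:model-1": {"prov:type": "prov:SoftwareAgent"},
    },
    "used": {
        "_:u1": {"prov:activity": "ex:completion-1", "prov:entity": "ex:prompt-1"},
    },
    "wasGeneratedBy": {
        "_:g1": {"prov:entity": "ex:response-1", "prov:activity": "ex:completion-1"},
    },
    "wasAssociatedWith": {
        "_:a1": {"prov:activity": "ex:completion-1", "prov:agent": "ex:model-1"},
    },
}

print(json.dumps(prov_doc, indent=2))
```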

LLM evals could be represented in the W3C EARL (Evaluation and Reporting Language).
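EARL's core unit is an assertion: who tested what against which test, with what outcome. A hedged sketch of one eval result as an EARL assertion in JSON-LD (the harness, model, and test URLs are invented placeholders):

```python
import json

# Hedged sketch: one LLM eval result as a W3C EARL assertion in JSON-LD.
# All example.org identifiers are illustrative placeholders.
assertion = {
    "@context": {"earl": "http://www.w3.org/ns/earl#"},
    "@type": "earl:Assertion",
    "earl:assertedBy": {"@id": "http://example.org/eval-harness"},
    "earl:subject": {"@id": "http://example.org/models/some-llm"},
    "earl:test": {"@id": "http://example.org/evals/item-42"},
    "earl:result": {
        "@type": "earl:TestResult",
        "earl:outcome": {"@id": "earl:passed"},
    },
}

print(json.dumps(assertion, indent=2))
```

A suite of evals then becomes a list of such assertions, one per (subject, test) pair.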

From https://news.ycombinator.com/item?id=44934531 :

> simonw/llm by default saves all prompt inputs and outputs in a sqlite database. Copilot has /save and gemini-cli has /export, but they don't yet autosave or flush before attempting to modify code given the prompt output?
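The autosave-before-edit idea can be sketched with the standard library. This is not simonw/llm's actual schema, just an illustrative pattern: commit each prompt/response pair to SQLite before any code-modifying step runs, so the log survives a crash mid-edit.

```python
import sqlite3

# Hedged sketch (not simonw/llm's real schema): log every exchange to
# SQLite and commit immediately, before acting on the model's output.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE IF NOT EXISTS responses (
           id INTEGER PRIMARY KEY,
           prompt TEXT NOT NULL,
           output TEXT NOT NULL,
           created TEXT DEFAULT CURRENT_TIMESTAMP
       )"""
)

def log_exchange(prompt: str, output: str) -> None:
    # Commit right away: the record must exist even if a later
    # code-modification step fails.
    conn.execute(
        "INSERT INTO responses (prompt, output) VALUES (?, ?)",
        (prompt, output),
    )
    conn.commit()

log_exchange("refactor foo()", "def foo(): ...")
rows = conn.execute("SELECT prompt, output FROM responses").fetchall()
print(rows)
```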