dmarwicke
a month ago
this has to be insanely expensive, right? Merge sort does about n log n comparisons, so sorting 100 items is roughly 100 · log2(100) ≈ 600 LLM calls
dkalola
a month ago
Yes, that's right. This is not a practical approach given current LLM costs. But it could become more useful in the future if per-call cost and latency drop (e.g., with models running locally) while context windows remain limited, since it relies on pairwise comparisons rather than fitting the entire list into one context window.
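To make the cost concrete, here's a minimal sketch of the idea: a plain merge sort driven by a pairwise comparator, where each comparison would be one LLM call. The `llm_compare` function here is a hypothetical stand-in (it just compares string lengths and counts invocations); a real version would prompt a model with both items and parse its preference.

```python
call_count = 0

def llm_compare(a: str, b: str) -> int:
    """Hypothetical stand-in for an LLM pairwise comparison.

    A real implementation would send both items to a model and parse
    which one it prefers; here we just compare by string length so the
    sketch is runnable, and count calls to estimate cost.
    """
    global call_count
    call_count += 1
    return (len(a) > len(b)) - (len(a) < len(b))

def merge_sort(items, cmp):
    """Classic top-down merge sort: O(n log n) comparator calls."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid], cmp)
    right = merge_sort(items[mid:], cmp)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if cmp(left[i], right[j]) <= 0:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

# 100 items of varying length, so there is something to sort
items = ["x" * (k % 17 + 1) for k in range(100)]
result = merge_sort(items, llm_compare)
print(call_count)  # on the order of n log n, i.e. several hundred "calls" for n=100
```

Merge sort is a natural fit here because it never needs more than two items in context at once, and its comparison count is close to the information-theoretic minimum, so you're not wasting calls.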