TaxCalcBench: Evaluating Frontier Models on the Tax Calculation Task

61 points, posted 18 hours ago
by handfuloflight

22 Comments

stared

12 hours ago

A bare model may lack a lot.

Yet a week ago I used Claude Code for my personal finances (not taxes) - I downloaded over a year’s worth of my bank account data. Since I pay for most things by card, if I buy lunch, it’s there.

With a single prompt (and about 10 minutes), it produced an analysis. It solved all the technical issues by itself (e.g., realizing it wasn’t CSV but TSV) and ran quite a few different explorations with Pandas. It was able to write an overview, find items that were likely misclassified, etc.
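The delimiter detection the model did on its own can be sketched in a few lines. This is a hypothetical reconstruction (the bank export columns are made up), using `csv.Sniffer` to detect that a ".csv" file is actually tab-separated before handing it to Pandas:

```python
import csv
import io

import pandas as pd

# Hypothetical bank export: tab-separated despite the .csv extension.
raw = (
    "date\tdescription\tamount\n"
    "2024-03-01\tLunch\t-12.50\n"
    "2024-03-02\tSalary\t3000.00\n"
)

# Sniff the delimiter from the header line instead of assuming commas.
dialect = csv.Sniffer().sniff(raw.splitlines()[0], delimiters=",;\t")
df = pd.read_csv(io.StringIO(raw), sep=dialect.delimiter)

# With the right separator, the columns parse and numeric analysis works.
print(df["amount"].sum())
```

A small check like this is exactly the kind of "technical issue" an agent can resolve on its own before running the actual exploration.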

Everything I checked by hand was correct.

So, instead of pursuing a project to write an AI tool for personal finance, I ended up concluding: “just use Claude Code.” As a side note, I used 14 months of data due to my mistake - I wanted to analyze 2 months of data, since I didn’t believe it would handle a larger set, but I misclicked the year. The file was 350 KB.

jasonjmcghee

7 hours ago

I hear you, but I'd also rather someone else assume the liability if possible. (Assuming there's a company backing the model)

So until there's umbrella AI insurance...

stared

6 hours ago

Exploratory data analysis is one thing - here the risk is low. If something does not work, it doesn't. Small omissions are not important.

As of now, I would not let AI make any financial decisions with direct consequences, unless the system were tested and benchmarked against accountants.

ofrzeta

16 hours ago

"Calculating US personal income taxes is a task that requires building an understanding of vast amounts of English text and using that knowledge to carefully compute results. ... Our experiment shows that state-of-the-art models succeed in calculating less than a third of federal income tax returns even on this simplified sample set."

Unsurprisingly. Sometimes I feel like I am in a madhouse. Or in an alchemist's laboratory.

michaelrbock

8 hours ago

Hi, author of this paper + repo here. This dataset is particularly hard to come by, so we’re really proud to be open sourcing it.

Let me know if you have any questions, happy to discuss!

anticensor

an hour ago

This topic is so American. In most other countries, you don't need to consult a tax expert to prepare a personal tax statement.

antiloper

7 hours ago

> For example, in the prompt for this experiment, the model is bootstrapped with the correct Form 1040 lines and short instructions as part of its context.

Given that only short instructions are in context, I would not have expected even a frontier model to score well on this benchmark. For better results, I'd think that giving the model access to the entire tax code is required (which likely requires RAG due to its sheer size).
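A minimal sketch of the retrieval step being suggested, with a toy corpus standing in for the actual tax code (the section snippets and the scoring function are illustrative, not a real RAG pipeline):

```python
# Hypothetical corpus: a few short snippets standing in for the full tax code.
sections = {
    "sec_61": "Gross income means all income from whatever source derived.",
    "sec_63": "Taxable income means gross income minus allowable deductions.",
    "sec_151": "Deductions for personal exemptions are allowed under this section.",
}

def tokenize(text):
    return [w.strip(".,").lower() for w in text.split()]

def retrieve(query, k=2):
    """Rank sections by word overlap with the query (toy stand-in for a
    real retriever with embeddings); return the top-k section IDs."""
    q = set(tokenize(query))
    scores = {sid: len(q & set(tokenize(body))) for sid, body in sections.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(retrieve("how is taxable income computed from gross income"))
```

Only the retrieved sections would go into the model's context, which is how you fit a corpus far larger than the context window.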

michaelrbock

3 hours ago

We tested models with knowledge cutoffs in 2025, so we expect them to have knowledge of Tax Year 2024 forms in their weights. We also tested models with the ability to do web search to get any other forms they think necessary: https://github.com/column-tax/tax-calc-bench

That all being said, we agree, which is what we've built with our internal tax coding agent, Iris: https://www.columntax.com/blog/introducing-iris-our-ai-tax-d... (ability to get just the right Tax form context on a per-line basis to turn the tax law into code).

daft_pink

8 hours ago

I think AI has problems with law-related tasks like taxes because there are so many terms of art. Taxes are essentially just laws, and because regulators and courts end up defining words in very specific, narrow ways (sometimes differently from one code section to another), AI has a lot of trouble applying these narrow definitions.

Honestly, I think humans have trouble with this as well.

Rudybega

14 hours ago

I wonder if you could dramatically improve these results with some relatively simple scaffolding and tool access.

If a ton of these mistakes are genuinely simple calculation errors, it seems like giving the models access to a calculator tool would help a fair bit.
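A sketch of what such a calculator tool might look like in an agent harness (the function name and the bracket arithmetic are illustrative; a real harness would wire this up as a tool-call schema). It evaluates arithmetic expressions via the `ast` module rather than `eval`, so the model delegates the math instead of computing it in-weights:

```python
import ast
import operator

# Supported binary operators for the toy calculator.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def calculate(expr: str) -> float:
    """Safely evaluate a basic arithmetic expression without eval()."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

# E.g. bracket math the model would otherwise do mentally
# (numbers are illustrative):
print(calculate("1160.0 + 0.12 * (50000 - 11600)"))
```

The point is that the model only has to produce the right expression; the exact arithmetic is then deterministic.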

sails

13 hours ago

I feel like we are already there. I would imagine that if you gave this task to Claude Code or Codex, running in the CLI, you would see a huge improvement, and that is before you start creating task-specific guardrails.

I’m surprised they haven’t tried this, I’m running my own in parallel against my accountant in this way.

Lionga

13 hours ago

The problem is they do not understand what or how to calculate, not the actual act of adding or multiplying. I tried asking ChatGPT to calculate some taxes for three countries, two of which I have already been filing taxes in. For the two I know, ChatGPT gave wildly wrong numbers (not even the right ballpark), so I knew I could not trust the numbers for the third, which was the one I was mostly interested in.

throwaway13337

11 hours ago

Useful.

I wonder what an average accountant would score.

I know LLMs have helped me identify many mistakes accountants have made on my behalf. Some mistakes that could have cost me a lot of money if not caught.

topaz0

9 hours ago

Given that they're restricting to very simple situations I'd expect accountants to score 100%.

anticensor

14 hours ago

Whereas almost every other country tries to make it easier to file taxes, even when the underlying tax schedule is complex.

hodgehog11

14 hours ago

Am I missing something or did they only assess this on Google and Anthropic models? If so, all I can ascertain from this is that latest Gemini models outperformed Claude on this particular task, which should be surprising to no-one. What about GPT-5? Open weight models?

jgalt212

10 hours ago

I'm surprised that no LLM has yet found any unresolved cycles in the US tax code.