A bare model may lack a lot.
Yet a week ago I used Claude Code for my personal finances (not taxes) - I downloaded over a year’s worth of my bank account data. Since I pay for most things by card, if I buy lunch, it’s there.
With a single prompt (and about 10 minutes), it produced an analysis. It solved all the technical issues by itself (e.g., realizing it wasn’t CSV but TSV) and ran quite a few different explorations with Pandas. It was able to write an overview, find items that were likely misclassified, etc.
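For what it's worth, the delimiter detection it figured out is straightforward to reproduce. A minimal sketch, assuming a tab-separated bank export named transactions.csv with hypothetical `category` and `amount` columns:

```python
import csv
import pandas as pd

PATH = "transactions.csv"  # hypothetical file name; actually tab-separated

# Sniff the real delimiter instead of trusting the file extension.
with open(PATH, newline="") as f:
    dialect = csv.Sniffer().sniff(f.read(4096))

df = pd.read_csv(PATH, sep=dialect.delimiter)

# One of the exploration passes: spend per category (column names assumed).
print(df.groupby("category")["amount"].sum().sort_values())
```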
Everything I checked by hand was correct.
So, instead of pursuing a project to write an AI tool for personal finance, I ended up concluding: “just use Claude Code.” As a side note, I used 14 months of data by mistake - I had wanted to analyze 2 months, since I didn’t believe it would handle a larger set, but I misclicked the year. The file was 350 KB.
I hear you, but I'd also rather someone else assume the liability if possible (assuming there's a company backing the model).
So until there's umbrella AI insurance...
Exploratory data analysis is one thing - the risk there is low. If something doesn't work, it simply doesn't work; small omissions don't matter much.
As of now, I would not let an AI automatically make any financial decisions with direct consequences, unless the system has been tested and benchmarked against accountants.
"Calculating US personal income taxes is a task that requires building an understanding of vast amounts of English text and using that knowledge to carefully compute results. ... Our experiment shows that state-of-the-art models succeed in calculating less than a third of federal income tax returns even on this simplified sample set."
Unsurprisingly. Sometimes I feel like I am in a madhouse. Or in an alchemist's laboratory.
Hi, author of this paper + repo here. This dataset is particularly hard to come by, so we’re really proud to be open sourcing it.
Let me know if you have any questions, happy to discuss!
This topic is so American. In most other countries, you wouldn't need to consult a tax expert to prepare a personal tax return.
> For example, in the prompt for this experiment, the model is bootstrapped with the correct Form 1040 lines and short instructions as part of its context.
Given that only short instructions are in context, I would not have expected even a frontier model to score well on this benchmark. For better results, I'd think that giving the model access to the entire tax code is required (which likely requires RAG due to its sheer size).
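To sketch what that retrieval step could look like (TF-IDF here instead of embeddings, purely to keep the example self-contained; the chunks and query are illustrative, not real excerpts):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: the tax code split into passage-sized chunks.
chunks = [
    "Sec. 1. Tax imposed. There is hereby imposed on the taxable income of ...",
    "Sec. 63. Taxable income defined. The term taxable income means ...",
    # ...thousands more chunks in a real setup
]

vectorizer = TfidfVectorizer(stop_words="english")
chunk_vectors = vectorizer.fit_transform(chunks)

def retrieve(question: str, k: int = 5) -> list[str]:
    """Return the k chunks most similar to the question."""
    q = vectorizer.transform([question])
    scores = cosine_similarity(q, chunk_vectors).ravel()
    return [chunks[i] for i in scores.argsort()[::-1][:k]]

# Retrieved passages would be prepended to the model's prompt.
context = "\n\n".join(retrieve("How is taxable income defined?"))
```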
I think AI has problems with law-related tasks like taxes because there are so many terms of art. Taxes are essentially just law, and because statutes, regulators, and courts end up defining words in very specific, narrow ways - sometimes differently from one code section to the next - AI has a lot of trouble applying those narrow definitions.
Honestly, I think humans have trouble with this as well.
I'm actually quite surprised.
From another article today, I discovered the IRS has a GitHub repo with (what seems to be) XML versions of tax questions... surely some combination of LLM and structured data querying could solve this? https://github.com/IRS-Public/direct-file/tree/main
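I haven't checked the repo's actual schema, but a first pass could be as simple as flattening the XML into text for an LLM to query - a rough sketch using only the standard library, with a hypothetical directory layout:

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def extract_passages(xml_dir: str) -> list[str]:
    """Flatten each XML file's text content into one passage."""
    passages = []
    for path in Path(xml_dir).glob("**/*.xml"):
        root = ET.parse(path).getroot()
        text = " ".join(t.strip() for t in root.itertext() if t.strip())
        passages.append(f"{path.name}: {text}")
    return passages

# "direct-file/xml" is an assumed layout, not the repo's real structure.
passages = extract_passages("direct-file/xml")
```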
I wonder if you could dramatically improve these results with some relatively simple scaffolding and tool access.
If a ton of these mistakes are genuinely simple calculation errors, it seems like giving the models access to a calculator tool would help a fair bit.
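As a sketch of what such a tool could be (provider-agnostic, since the tool-calling plumbing depends on the API): a sandboxed arithmetic evaluator the model calls instead of doing mental math. The bracket figures in the example are just illustrative:

```python
import ast
import operator

# Only plain arithmetic is allowed; anything else raises.
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def calculate(expression: str) -> float:
    """Safely evaluate an arithmetic expression passed by the model."""
    def ev(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.operand))
        raise ValueError(f"unsupported expression: {expression!r}")
    return ev(ast.parse(expression, mode="eval").body)

# e.g. tax on the portion of income falling in a 22% bracket:
print(calculate("(95375 - 44725) * 0.22"))  # 11143.0
```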
I feel like we are already there. I would imagine that if you set Claude Code or Codex on this task, running in the CLI, you would see a huge improvement - and that is before you start creating task-specific guardrails.
I'm surprised they haven't tried this; I'm running my own in parallel against my accountant in this way.
The problem is they don't understand what or how to calculate, not the actual act of adding or multiplying. I tried asking ChatGPT to calculate taxes for three countries, in two of which I've already been filing taxes. For the two I know, ChatGPT gave wildly wrong numbers (not even the right ballpark), so I knew I couldn't trust the numbers for the third, which was the one I was actually interested in.
Useful.
I wonder what an average accountant would score.
I know LLMs have helped me identify many mistakes accountants have made on my behalf - some that could have cost me a lot of money if not caught.
Given that they're restricting to very simple situations I'd expect accountants to score 100%.
Whereas almost every other country tries to make it easier to file taxes, even when the underlying tax schedule is complex.
Am I missing something or did they only assess this on Google and Anthropic models? If so, all I can ascertain from this is that latest Gemini models outperformed Claude on this particular task, which should be surprising to no-one. What about GPT-5? Open weight models?
I'm surprised that no LLM has yet found any unresolved cycles in the US tax code.
Oh you mean infinite/zero tax glitches.