foundry27
12 hours ago
For anyone who hasn’t seen this before, mechanistic interpretability addresses a very common problem with LLMs: when you ask a model to explain itself, you’re playing a game of rhetoric in which the model tries to “convince” you of a reason for what it did by generating a plausible-sounding answer from patterns in its training data. And unlike most benchmarks, which improve as models get more capable, more powerful models often score worse on tests designed to self-detect “untruthfulness”: they have stronger rhetoric, and are therefore more compelling at justifying lies after the fact. The objective is coherence, not truth.
Rhetoric isn’t reasoning. True explainability, like what overfitted Sparse Autoencoders claim to offer, essentially recovers the causal sequence of “thoughts” the model went through as it produced an answer. It’s the same way you may have a bunch of ephemeral thoughts running in different directions while you think about anything.
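Roughly, the SAE recipe looks like this (a minimal sketch, not any lab's actual code; the dimensions, names, and L1 coefficient are all made up for illustration):

    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        # Decompose a model's hidden activations into a wider, sparse
        # set of candidate "features" (dimensions are illustrative).
        def __init__(self, d_model=768, d_features=16384):
            super().__init__()
            self.encoder = nn.Linear(d_model, d_features)
            self.decoder = nn.Linear(d_features, d_model)

        def forward(self, activations):
            features = torch.relu(self.encoder(activations))  # mostly-zero codes
            reconstruction = self.decoder(features)
            return features, reconstruction

    def sae_loss(activations, features, reconstruction, l1_coeff=1e-3):
        # Reconstruct the activations while pushing most feature
        # activations toward zero, so each surviving feature is
        # (hopefully) a human-interpretable direction.
        mse = (reconstruction - activations).pow(2).mean()
        sparsity = features.abs().mean()
        return mse + l1_coeff * sparsity

The interpretability claim then rests on reading off which of those sparse features fire on which inputs, rather than asking the model to narrate its own reasons.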
stavros
12 hours ago
I want to point out here that people do the same: a lot of the time we don't know why we thought or did something, but we'll confabulate plausible-sounding rhetoric after the fact.
mdp2021
3 hours ago
/Some/ people bullshit themselves stating the plausible; others check their hypotheses.
The difference is total in both humans and automated processes.
stavros
3 hours ago
How are you going to check your hypotheses for why you preferred that jacket to that other jacket?
mdp2021
3 hours ago
Do not lose the original point: some systems have the goal of sounding plausible, while others have the goal of telling the truth. Some systems, when asked "where have you been", will reply "at the baker's" because it is a nice narrative in their "novel writing, re-writing of reality"; others will check memory and say "at the butcher's", where they have actually been.
When people invent explicit reasons for why they turned left or right, those reasons remain hypotheses. The clumsy will promote those hypotheses to beliefs. The apt will keep the spontaneous ideas as hypotheses until the ability to assess them comes.
DSingularity
3 hours ago
Is that example representative of the LLM tasks for which we seek explainability?
stavros
3 hours ago
Are we holding LLMs to a higher standard than people?
f_devd
2 hours ago
Ideally, yes: LLMs are tools that we expect to work, while people are inherently fallible and (even unintentionally) deceptive. LLMs being human-like in this specific way is not desirable.
stavros
2 hours ago
Then I think you'll be very disappointed. LLMs aren't in the same category as calculators, for example.
f_devd
4 minutes ago
I have no illusions about LLMs; I have been working with them since OG BERT, always with these same issues and more. I'm just stating what would be needed in the future to make them reliably useful outside of creative writing & (human-guided & checked) search.
If an LLM provides an incorrect/orthogonal rhetoric without a way to reliably fix/debug it, it's just not as useful as it theoretically could be given the data contained in its parameters.
sinuhe69
7 hours ago
Not in math.
TeMPOraL
6 hours ago
Yes in math. Formalisms come after casual thoughts, at every step.
mdp2021
3 hours ago
It's totally different: those formalisms live on a workbench, following a set of rules that either work or don't.
So, yes, that (math) is representative of the actual process: pattern recognition gives you spontaneous ideas, which you then assess for truthfulness in conscious acts of verification.
sinuhe69
3 hours ago
What is a casual thought that you cannot explain in math?
TeMPOraL
2 hours ago
That question makes no sense. You can explain anything in math, because math is a language and lets you define whatever terms and axioms you need at a given moment.
(Whether or not such explanation is useful for anything is another issue entirely.)
worldsayshi
36 minutes ago
Can you explain how intuition led you to try a certain approach?
LoganDark
7 hours ago
The split-brain experiment is one of my favorites! https://www.youtube.com/watch?v=wfYbgdo8e-8
benreesman
10 hours ago
A lot of the mech interp stuff has seemed to me like a different kind of voodoo: the Integer Quantum Hall Effect? Overloading the term “Superposition” in a weird analogy not governed by serious group representation theory and some clear symmetry? You guys are reaching. And I’ve read all the papers. Spot the postdoc who decided to get paid.
But there is one thing in particular that I’ll acknowledge as a great insight and the beginnings of a very plausible research agenda: bounded sets of near-orthogonal vectors are wildly counterintuitive in high dimensions, and there are existing results around them that create scope for rigor [1].
[1] https://en.m.wikipedia.org/wiki/Johnson%E2%80%93Lindenstraus...
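The counterintuitive part is easy to see numerically (a toy sketch under arbitrary assumptions; the vector counts and dimensions here are not from any paper):

    import numpy as np

    rng = np.random.default_rng(0)

    def max_abs_cosine(dim, n_vectors=1000):
        # Draw random unit vectors and measure how far the worst
        # pair is from being exactly orthogonal.
        v = rng.standard_normal((n_vectors, dim))
        v /= np.linalg.norm(v, axis=1, keepdims=True)
        cos = v @ v.T
        np.fill_diagonal(cos, 0.0)
        return np.abs(cos).max()

    for dim in (10, 100, 1000, 10000):
        print(dim, round(max_abs_cosine(dim), 3))
    # The worst-case overlap shrinks roughly like sqrt(log(n)/dim):
    # you can pack far more "almost orthogonal" directions than you
    # have dimensions, which is the Johnson-Lindenstrauss-flavored
    # intuition behind superposition.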
txnf
9 hours ago
Superposition codes are a well-known concept in information theory. I think there is certainly more to the story than described in the current works, but it does feel like they are going in the right direction.
drdeca
9 hours ago
Where are you seeing the integer quantum Hall effect mentioned? Or are you bringing it up rather than responding to it being brought up elsewhere? I don’t understand what the connection between IQHE and these SAE interpretability approaches is supposed to be.
benreesman
8 hours ago
Pardon me, the reference is to the fractional Hall effect.
"But our results may also be of broader interest. We find preliminary evidence that superposition may be linked to adversarial examples and grokking, and might also suggest a theory for the performance of mixture of experts models. More broadly, the toy model we investigate has unexpectedly rich structure, exhibiting phase changes, a geometric structure based on uniform polytopes, "energy level"-like jumps during training, and a phenomenon which is qualitatively similar to the fractional quantum Hall effect in physics, among other striking phenomena. We originally investigated the subject to gain understanding of cleanly-interpretable neurons in larger models, but we've found these toy models to be surprisingly interesting in their own right."
snthpy
8 hours ago
A{rt,I} imitating life
I believe that's why humans reason too. We make snap judgements and then use reason to try to convince others of our beliefs. Can't recall the reference right now, but the argument was that reasoning is really a tool for social influence. That also explains why people who are good at it find it hard to admit when they are wrong: they're not used to having to, because they can usually out-argue others. Prominent examples are easy to find - X marks de spot.
jamesemmott
an hour ago
I wonder if the reference you are reaching for, if it's not the Jonathan Haidt book suggested by a sibling comment, is The Enigma of Reason by the cognitive psychologists Hugo Mercier and Dan Sperber (2017).
In that book (quoting here from the abstract), Mercier and Sperber argue that reason 'is not geared to solitary use, to arriving at better beliefs and decisions on our own', but rather to 'help us justify our beliefs and actions to others, convince them through argumentation, and evaluate the justifications and arguments that others address to us'. Reason, they suggest, 'helps humans better exploit their uniquely rich social environment'.
They resist the idea (popularized by Daniel Kahneman) that there is 'a contrast between intuition and reasoning as if these were two quite different forms of inference', proposing instead that 'reasoning is itself a kind of intuitive inference'. For them, reason as a cognitive mechanism is 'much more opportunistic and eclectic' than is implied by the common association with formal systems like logic. 'The main role of logic in reasoning, we suggest, may well be a rhetorical one: logic helps simplify and schematize intuitive arguments, highlighting and often exaggerating their force.'
Their 'interactionist' perspective helps explain how illogical rhetoric can be so socially powerful; it is reason, 'a cognitive mechanism aimed at justifying oneself and convincing others', fulfilling its evolutionary social function.
Highly recommended, if you're not already familiar.
mdp2021
4 hours ago
Already before Galileo we had experiments to determine whether ideas represented reality or not. And in crucial cases, long before that, it meant life or death. This will be clear to engineers.
«Reason» is part of that mechanism of vetting ideas. You experience massive failures without it.
So, no, trained judgement is a real thing, and the presence of innumerable incompetents does not prove an alleged absence of the competent.
omgwtfbyobbq
5 hours ago
I think Robert Sapolsky's lectures on yt cover this to some degree around 115.
https://youtu.be/wLE71i4JJiM?feature=shared
Sometimes our cortex is in charge, sometimes other parts of our brain are, and we can't tell the difference. Regardless, if we try to justify it later, that justification isn't always coherent because we're not always using the part of our brain we consider to be rational.
shshshshs
5 hours ago
People who are good at reasoning find it hard to admit that they were wrong?
That’s not my experience. People with reason are.. reasonable.
You mention X and that’s not where the reasoners are. That’s where the (wanna be) politicians are. Rhetoric is not all of reasoning.
I can agree that rationalizing snap judgements is one of our capabilities but I am totally unconvinced that it is the totality of our reasoning capabilities. Perhaps I misunderstood.
Hedepig
4 hours ago
This is not totally my experience. I've debated a successful engineer who by all accounts has good reasoning skills, but he will absolutely double down on unreasonable ideas he's formed on the fly if he can find what he considers a coherent argument behind them. Sometimes, if I can absolutely prove him wrong, he'll change his mind.
But I think this is ego getting in the way, and our reluctance to change our minds.
We like to point to artificial intelligence, explain how it works differently, and then conclude it's not "true reasoning". I'm not sure that's a good conclusion. We should look at the output and decide. As flawed as it is, I think it's rather impressive.
mdp2021
3 hours ago
> ego getting in the way
That thing which was in fact identified thousands of years ago as the evil to ditch.
> reluctance to change our minds
That is clumsiness in a general drive that makes sense and is a recognized part of Belief Change Theory: epistemic change is conservative. I.e., when you revise a body of knowledge, you do not want to lose valid notions. But conversely, you do not want to be unable to see change or errors, so there is a balance.
> it's not "true reasoning"
If it demonstrably does not check its "spontaneous" ideas explicitly, then it is correct to say 'it's not "true reasoning"'.
briffid
7 hours ago
Jonathan Haidt's The Righteous Mind describes this in detail.
Onavo
11 hours ago
How does the causality part work? Can it spit out a graphical model?
fsndz
11 hours ago
I stopped at: "causal sequence of “thoughts” "
benchmarkist
11 hours ago
Interpretability research is basically a projection of the original function implemented by the neural network onto a sub-space of "explanatory" functions that people consider more understandable. You're right that the words they use to sell the research are completely nonsensical, because the abstract process has nothing to do with anything causal.
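To make the "projection" framing concrete (a toy sketch, not from any actual interpretability codebase; every name and number here is invented): approximate an opaque function by the closest member of a simpler, human-readable family, here plain linear maps.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((1000, 8))
    hidden_w = rng.standard_normal(8)
    # Stand-in for what the network "really" computes: nonlinear, unexplained.
    network = np.tanh(X @ hidden_w) + 0.1 * X[:, 0] ** 2

    # Least-squares fit = orthogonal projection onto the linear "explanations".
    w, *_ = np.linalg.lstsq(X, network, rcond=None)
    surrogate = X @ w
    explained = 1 - np.var(network - surrogate) / np.var(network)
    print(f"variance explained by the linear 'explanation': {explained:.2f}")
    # Whatever the surrogate fails to capture is exactly what the
    # "explanation" silently throws away.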
HeatrayEnjoyer
9 hours ago
All code is causal.
benchmarkist
9 hours ago
Which makes it entirely irrelevant as a descriptive term.
mdp2021
3 hours ago
"Servers shall be strict in formulation and flexible in interpretation."