NitpickLawyer
4 days ago
Having the ability to throw math-heavy ML papers at the assistants and get simplified explanations / pseudocode back is absolutely amazing, as someone who's forgotten most of what I learned in uni 25+ years back and never really used it since.
tyre
4 days ago
This is where LLMs shine for learning imo: throwing a paper into Claude, getting an overview, then being able to ask questions.
Especially for fields that I didn’t study at the Bachelor's or Master's level, like biology. Getting to engage with deeper research with a knowledgeable tutor assistant has enabled me to go deeper than I otherwise could.
fastasucan
3 days ago
How do you know it's correct? And how do you learn to engage with theory-heavy subjects doing it this way?
malux85
2 days ago
How do you know anything is correct? LLMs can be wrong, humans can be wrong, you can be wrong. The motto of the Royal Society is "Nullius in verba", a Latin phrase meaning "take nobody's word for it". That's LITERALLY the motto of the Royal Society. It's your job as a scientist and critical thinker to test assumptions, observe reality, and use empirical inquiry to seek truth, and in the process, question ALL sources and test all assertions, from multiple angles if required.
mrwrong
a minute ago
Amusing that this comment contains a subtle appeal to authority. "Take nobody's word for it" -- you can take the Royal Society's word for that.
abhgh
2 days ago
You don't - the way I use LLMs for explanations is that I keep going back and forth between the LLM explanation and Google search/Wikipedia. And of course asking the LLM to cite sources helps.
This might sound cumbersome, but without the LLM I wouldn't have (1) known what to search for, (2) in a way that lets me incrementally build a mental model. So it's a net win for me. The only gap I see is coverage/recall: when asked for different techniques to accomplish something, the LLM might miss some techniques - and what is missed depends on the specific LLM. My solution here is asking multiple LLMs and going back to Google search.
NuclearPM
3 days ago
Ask for sources. Easy.
devin
3 days ago
If you did not study these topics, the chances are good you do not know what questions to even ask, let alone how to ask them. Add to that the fact that you don't even know whether the original summary is accurate.
tyre
3 days ago
The original summary is the paper’s abstract, which I read. The questions I ask are what I don’t understand or am curious about. Chances are 100% that I know what these are!
I’m not trying to master these subjects for any practical purpose. It’s curiosity and learning.
It’s not the same as taking a class; not worse either. It’s a different type of learning for specific situations.
kurthr
3 days ago
Asking the right questions (in the right language) was important before and it's even more important with LLMs, if you want to get any real leverage out of them.
paulryanrogers
4 days ago
Isn't there a risk that you're engaging with an inaccurate summarization? At some point inaccurate information is worse than no information.
Perhaps in low stakes situations it could at least guarantee some entertainment value. Though I worry that folks will get into high stakes situations without the tools to distinguish facts from smoothly worded slop.
tovej
3 days ago
Yes. I usually test AI assistants by giving them my own work to summarize, and have nearly always found errors in their interpretation of the work.
The texts have to be short and high-level for the assistants to have any chance of accurately explaining them.
slow_typist
3 days ago
I can probably process anything short and high-level by myself in a reasonable time, and if I can’t, I will know, while the LLM will always simulate perfect understanding.
augment_me
3 days ago
There is, but there is an equal risk if you were to engage about any topic with any teacher you know. Everyone has a bias, and as long as you don't base your worldview and decisions fully on one output, you will be fine.
SetTheorist
3 days ago
Experimenting with LLMs, I've had examples like it providing the Cantor Set (a totally disconnected topological space) as an example of a Continuum immediately after it provides the (correct) definition as a non-empty compact, connected (Hausdorff) topological space. This is immediately obvious as nonsense if you understand the topic, but if one was attempting to learn from this, it could be very confusing and misleading. No human teacher would do this.
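(Aside for readers without the topology background: the contradiction is visible from the standard textbook definitions alone, sketched below; nothing here is specific to the LLM transcript being described.)

```latex
% Standard definitions (textbook, not from the LLM transcript):
%   A continuum is a non-empty, compact, connected Hausdorff space.
%   The Cantor set C \subseteq [0,1] is compact and Hausdorff, but totally
%   disconnected: its only connected subsets are single points.
% Since C has more than one point, total disconnectedness rules out
% connectedness, so C cannot be a continuum:
\[
  C \text{ totally disconnected},\ |C| > 1
  \;\Longrightarrow\; C \text{ not connected}
  \;\Longrightarrow\; C \text{ is not a continuum.}
\]
```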
tyre
3 days ago
I don’t know what any of this means!
But I’m not trying to become an expert in these subjects. If I were, this isn’t the tool I’d use in isolation (which I don’t for these cases anyway.)
Part of reading, questioning, interpreting, and thinking about these things is (a) defining concepts I don’t understand and (b) digging into the levels beneath what I might.
It doesn’t have to be 100% correct to understand the shape and implications of a given study. And I don’t leave any of these interactions thinking, “ah, now I am an expert!”
Even if it were perfectly correct, neither my memory nor understanding is. That’s fine. If I continue to engage with the topic, I’ll make connections and notice inconsistencies. Or I won’t! Which is also fine. It’s right enough to be net (incredibly) useful compared to what I had before.
thoroughburro
3 days ago
It’s my experience that humans are far, far, far more trustworthy about their limitations than LLMs. Obviously, this varies by human.
fastasucan
3 days ago
>but there is an equal risk if you were to engage about any topic with any teacher you know.
No, it isn't.
HDThoreaun
3 days ago
I’ve used LLMs to summarize hundreds of papers. They've been more accurate than any teacher I’ve known. Summarizing text is one of their best skills.
tipperjones
3 days ago
It’s only equal if you consider two outcomes: some risk and no risk.
And there’s always some risk.
cma
3 days ago
Are you just saying that broadly, e.g. that the original 2022 ChatGPT was also an equal risk if you used it this way?
You won't be able to verify everything taught from first principles, so you do have to give different sources different credibility at some point, I think.
paladin314159
3 days ago
I've been doing this a fair amount recently, and the way I manage it is: first, give the LLM the PDF and ask it to summarize + provide high-level reading points. Then read the paper with that context to verify details, and while doing so, ask the LLM follow-up questions (very helpful for topics I'm less familiar with). Typically, everything is either directly in the original paper or verifiable on the internet, so if something feels off then I'll dig into it. Through the course of ~20 papers, I've run into one or two erroneous statements made by the LLM.
To your point, it would be easy to accidentally accept things as true (especially the more subjective "why" things), but the hit rate is good enough that I'm still getting tons of value through this approach. With respect to mistakes, it's honestly not that different from learning something wrong from a friend or a teacher, which, frankly, happens all the time. So it pretty much comes down to the individual person's skepticism and desire for deep understanding, which usually will reveal such falsehoods.
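A rough sketch of that loop in Python, assuming a hypothetical ask_llm() helper (the name and signature are illustrative, not any real SDK; wire it to whichever chat API you use):

```python
# Sketch of the summarize-then-interrogate reading loop described above.
# ask_llm() is a hypothetical helper wrapping whatever chat API you use;
# swap in your provider's real SDK call.

def ask_llm(prompt: str, pdf_path: str | None = None) -> str:
    """Hypothetical: send a prompt (optionally with an attached PDF) to an LLM."""
    raise NotImplementedError("wire this to your provider's API")

def read_paper(pdf_path: str) -> None:
    # Step 1: get a summary plus high-level reading points before the first pass.
    overview = ask_llm(
        "Summarize this paper and list the key points to watch for while reading.",
        pdf_path=pdf_path,
    )
    print(overview)

    # Step 2: read the paper yourself, and ask follow-ups as questions come up.
    # Verify anything surprising against the paper text or an outside source.
    while True:
        question = input("Follow-up question (blank to stop): ").strip()
        if not question:
            break
        print(ask_llm(question, pdf_path=pdf_path))
```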
anon291
3 days ago
There is, but just ask it to cite the foundational material. A huge issue with reading papers in topics you don't know about is that you lack the prerequisite knowledge, and without a professor in that field, it may be difficult to really build that. ChatGPT is a huge productivity boost. Just ask it to cite references and read those.
fragmede
4 days ago
I'm not sure of the exact dollar value of feeling safe enough to ask really stupid questions that I should already know the answer to, and that I'd be embarrassed if anyone saw me ask Claude, but it's more than I'm paying them. Maybe that's the enshittification play. Extra $20/month if you don't want it to sound judgey about your shit.
sesm
3 days ago
How do you verify that the explanation is accurate? Mathematical definitions can be very subtle.
trauco
3 days ago
The answer is you get the top mathematician in the world to do it, easy peasy.
“The argument used some p-adic algebraic number theory which was overkill for this problem. I then spent about half an hour converting the proof by hand into a more elementary proof, which I presented on the site.”
What’s the exchange rate for 30 minutes of Tao’s brain time in regular researcher’s time? 90 days? A year?
gjm11
3 days ago
For that sort of task: no, Tao isn't all that much better than a "regular researcher" at relatively easy work. But the tougher the problems you set them, the more advantage Tao will have.
... But mathematics gets very specialized, and if it's a problem in a field the other guy is familiar with and Tao isn't, they'll outperform Tao unless it's a tough enough problem that Tao takes the time to learn a new field for it, in which case maybe he'll win after all through sheer brainpower.
Yes, Tao is very very smart, but it's not like he's 100x better at everything than every other mathematician.
codemac
3 days ago
Math notation is high-context, so it's great to just ask LLMs to print out the low-context version in something like Lisp, where I can read and decompose it quickly.
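As a made-up illustration of that decomposition (my example, not the commenter's), a dense formula like softmax reads more easily once every sub-expression gets its own name:

```python
import math

# Illustrative only: the "name every sub-expression" idea the comment
# describes, written in Python rather than Lisp.
# Dense notation: softmax_i = exp(x_i) / sum_j exp(x_j)
def softmax(xs: list[float]) -> list[float]:
    exps = [math.exp(x) for x in xs]   # exp(x_i) for each i
    total = sum(exps)                  # sum_j exp(x_j)
    return [e / total for e in exps]   # one term per input
```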