dekhn
17 hours ago
The way I look at transformers is: they have been one of the most fertile inventions in recent history. Originally published in 2017, they have in the subsequent 8 years completely transformed (heh) multiple fields, and at least partially led to one Nobel Prize.
Realistically, I think the valuable idea is probabilistic graphical models, of which transformers are an example. Combining probability with sequences, or with trees and graphs, is likely to remain a valuable area for research exploration for the foreseeable future.
samsartor
16 hours ago
I'm skeptical that we'll see a big breakthrough in the architecture itself. As sick as we all are of transformers, they are really good universal approximators. You can get some marginal gains, but how much more _universal_ are you realistically going to get? I could be wrong, and I'm glad there are researchers out there looking at alternatives like graphical models, but for my money we need to look further afield: reconsider the auto-regressive task, cross-entropy loss, even gradient descent optimization itself.
kingstnap
14 hours ago
There are many many problems with attention.
The softmax has issues with attention sinks [1], and it also causes sharpness problems [2]. In general, the decision boundary being Euclidean dot products isn't actually optimal for everything; there are many classes of problems where you want polyhedral cones [3]. Positional embeddings are also janky af, and so is RoPE tbh; I think Cannon layers are a more promising alternative for horizontal alignment [4].
I still think there is plenty of room to improve these things. But a lot of focus right now is unfortunately being spent on benchmaxxing flawed benchmarks that can be hacked with memorization. A really promising and underappreciated direction is synthetically constructing tasks and tests that mathematically should not work well with current architectures, and proving that they do in fact struggle. A great example of this is the "Transformers Need Glasses" paper [5], or belief state transformers with their star task [6]. The Google one about the limits of embedding dimensions is also great, and shows how the dimension of the QK part is actually important to getting good retrieval [7].
[1] https://arxiv.org/abs/2309.17453
[2] https://arxiv.org/abs/2410.01104
[3] https://arxiv.org/abs/2505.17190
[4] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5240330
[5] https://arxiv.org/abs/2406.04267
ACCount37
13 hours ago
If all your problems with attention are actually just problems with softmax, then that's an easy fix. Delete softmax lmao.
No but seriously, just fix the fucking softmax. Add a dedicated "parking spot" like GPT-OSS does and eat the gradient flow tax on that, or replace softmax with any of the almost-softmax-but-not-really candidates. Plenty of options there.
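For the unfamiliar: the "parking spot" idea is roughly to give the softmax an extra slot to dump attention mass into, so a head isn't forced to spread all of its weight over real tokens. A minimal sketch of the idea (toy code, single head, no masking; this is my illustration of the concept, not GPT-OSS's actual implementation):

    import torch

    def attention_with_sink(q, k, v, sink_logit):
        # q, k, v: (T, d); sink_logit: a learned scalar (one per head in practice)
        scores = q @ k.T / k.shape[-1] ** 0.5          # (T, T) dot-product logits
        sink = sink_logit.expand(scores.shape[0], 1)   # (T, 1) extra "parking" slot
        probs = torch.softmax(torch.cat([sink, scores], dim=-1), dim=-1)
        # Drop the sink column: its mass goes nowhere, so a row's weights over
        # real tokens no longer have to sum to 1 -- the head can "opt out".
        return probs[:, 1:] @ v

    q, k, v = (torch.randn(8, 16) for _ in range(3))
    out = attention_with_sink(q, k, v, torch.nn.Parameter(torch.zeros(1)))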
The reason why we're "benchmaxxing" is that benchmarks are the metrics we have, and the only way by which we can sift through this gajillion of "revolutionary new architecture ideas" and get at the ones that show any promise at all. Of which there are very few, and fewer still that are worth their gains once you account for the fact that there isn't an unlimited amount of compute. Especially not when it comes to frontier training runs.
Memorization vs generalization is a well known idiot trap, and we are all stupid dumb fucks in the face of applied ML. Still, some benchmarks are harder to game than others (guess how we found that out), and there's power in that.
thousand_nights
9 hours ago
reason we're benchmaxxing is that there's a huge monetary incentive now to have the best performing model on these synthetic benchmarks, and that status is worth a lot of money
literally every point-X model release from every major player includes some benchmark graphs to show off
skissane
6 hours ago
> Add a dedicated "parking spot" like GPT-OSS does and eat the gradient flow tax on that
Not familiar with this topic, but intrigued. Anywhere I can read more about it?
eldenring
14 hours ago
I think something with more uniform training and inference setups (but otherwise equally hardware-friendly, just as easily trainable, and equally expressive) could replace transformers.
krychu
13 hours ago
BDH
tim333
13 hours ago
Yeah that thing is quite interesting - baby dragon hatchling https://news.ycombinator.com/item?id=45668408 https://youtu.be/mfV44-mtg7c
mxkopy
7 hours ago
I agree. Gradient descent implicitly assumes things have a meaningful gradient, which they don't always. And even if we say anything can be approximated by a continuous function, we're learning that we don't like approximations in our AI. Some discrete alternative to SGD would be nice.
jimbo808
17 hours ago
Which fields have they completely transformed? How was it before and how is it now? I won't pretend like it hasn't impacted my field, but I would say the impact is almost entirely negative.
isoprophlex
16 hours ago
Everyone who did NLP research or product discovery in the past 5 years had to pivot real hard to salvage their shit post-transformers. They're very disruptively good at most NLP tasks.
edit: post-transformers meaning "in the era after transformers were widely adopted" not some mystical new wave of hypothetical tech to disrupt transformers themselves.
dingnuts
15 hours ago
Sorry but you didn't really answer the question. The original claim was that transformers changed a whole bunch of fields, and you listed literally the one thing language models are directly useful for: modeling language.
I think this might be the ONLY example that doesn't back up the original claim, because of course an advancement in language processing is an advancement in language processing -- that's tautological! every new technology is an advancement in its domain; what's claimed to be special about transformers is that they are allegedly disruptive OUTSIDE of NLP. "Which fields have been transformed?" means ASIDE FROM language processing.
other than disrupting users by forcing "AI" features they don't want on them... what examples of transformers being revolutionary exist outside of NLP?
Claude Code? lol
torginus
10 hours ago
I think you're underselling the field of language processing - it wasn't just a single field but a bunch of subfields, each with their own little journals, papers and techniques; someone who researched machine translation approached problems differently from somebody who did sentiment analysis for marketing.
I had a friend who did PhD research in NLP and I had a problem of extracting some structured data from unstructured text, and he told me to just ask ChatGPT to do it for me.
Basically, ChatGPT is almost always better at language-based tasks than most of the specialized techniques, developed over decades, that those subfields built for their specific problems.
That's a pretty effing huge deal, even if it falls short of the AGI 2027 hype
dotnet00
14 hours ago
I think they meant fields of research. If you do anything in NLP, CV, inverse-problem solving or simulations, things have changed drastically.
Some directly, because LLMs and highly capable general purpose classifiers that might be enough for your use case are just out there, and some because of downstream effects, like GPU-compute being far more common, hardware optimized for tasks like matrix multiplication and mature well-maintained libraries with automatic differentiation capabilities. Plus the emergence of things that mix both classical ML and transformers, like training networks to approximate intermolecular potentials faster than the ab-initio calculation, allowing for accelerating molecular dynamics simulations.
ComplexSystems
14 hours ago
Transformers aren't only used in language processing. They're very useful in image processing, video, audio, etc. They're kind of like a general-purpose replacement for RNNs that's better in many ways.
rcbdev
12 hours ago
As a professor and lecturer, I can safely assure you that the transformer model has disrupted the way students learn, in the literal sense of the word.
conartist6
13 hours ago
The goal was never to answer the question. So what if it's worse. It's not worse for the researchers. It's not worse for the CEOs and the people who work for the AI companies. They're bathing in the limelight so their actual goal, as they would state it to themselves, is: "To get my bit of the limelight"
conartist6
13 hours ago
> The final conversation on Sewell’s screen was with a chatbot in the persona of Daenerys Targaryen, the beautiful princess and Mother of Dragons from “Game of Thrones.”
>
> “I promise I will come home to you,” Sewell wrote. “I love you so much, Dany.”
>
> “I love you, too,” the chatbot replied. “Please come home to me as soon as possible, my love.”
>
> “What if I told you I could come home right now?” he asked.
>
> “Please do, my sweet king.”
>
> Then he pulled the trigger.
Reading the newspaper is such a lovely experience these days. But hey, the AI researchers are really excited, so who really cares if stuff like this happens, as long as we can declare that "therapy is transformed!"
It sure is. Could it have been that attention was all that kid needed?
iknowstuff
14 hours ago
dingnuts
14 hours ago
I'm not watching a video on Twitter about self-driving, from the company that told us twelve years ago that completely autonomous vehicles were a year away, as a rebuttal to the point I made.
If you have something relevant to say, you can summarize for the class & include links to your receipts.
iknowstuff
12 hours ago
your choice, I don't really care about your opinion
DonHopkins
6 hours ago
Then why call yourself "iknowstuff" then prove you don't?
lawlessone
11 hours ago
Then why reply to them?
rootnod3
16 hours ago
So, unless this went r/woosh over my head... how is current AI better than shit post-transformers? If anything... old shit post-transformers was at least deterministic or open, and not a randomized shitbox.
Unless I misinterpreted the post, render me confused.
isoprophlex
16 hours ago
I wasn't too clear, I think. Apologies if the wording was confusing.
People who started their NLP work (PhDs etc; industry research projects) before the LLM / transformer craze had to adapt to the new world. (Hence 'post-mass-uptake-of-transformers')
dgacmu
16 hours ago
I think you're misinterpreting: "with the advent of transformers, (many) people doing NLP with pre-transformers techniques had to salvage their shit"
numpad0
16 hours ago
There's no post-transformer tech. There are lots of NLP tasks that you can now, just, prompt an LLM to do.
isoprophlex
15 hours ago
Yeah unclear wording; see the sibling comment also. I meant "the tech we have now", in the era after "attention is all you need"
jimmyl02
17 hours ago
in the super public consumer space, search engines / answer engines (like chatgpt) are the big ones.
on the other hand, it's also led to improvements in many places hidden behind the scenes. for example, vision transformers are much more powerful and scalable than many of the other computer vision models, which has probably led to new capabilities.
in general, transformers aren't just "generate text"; they're a new foundational model architecture that enables a step change in many things which require modeling!
ACCount37
15 hours ago
Transformers also make for a damn good base to graft just about any other architecture onto.
Like, vision transformers? They seem to work best when they still have a CNN backbone, but the "transformer" component is very good at focusing on relevant information, and doing different things depending on what you want to be done with those images.
And if you bolt that hybrid vision transformer to an even larger language-oriented transformer? That also imbues it with basic problem-solving, world knowledge and commonsense reasoning capabilities - which, in things like advanced OCR systems, are very welcome.
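A hedged sketch of that grafting pattern: vision features get projected into the language model's token space, so image patches become "words" the LLM attends over. Every module name here is a stand-in of my own, not any specific model's layout:

    import torch
    import torch.nn as nn

    class VisionToLLMBridge(nn.Module):
        def __init__(self, vision_encoder, llm, vision_dim, llm_dim):
            super().__init__()
            self.vision_encoder = vision_encoder   # e.g. a CNN-backbone ViT
            self.project = nn.Linear(vision_dim, llm_dim)
            self.llm = llm

        def forward(self, image, text_embeds):
            patches = self.vision_encoder(image)   # (B, P, vision_dim) patch features
            vis_tokens = self.project(patches)     # (B, P, llm_dim) "visual words"
            # The LLM attends over image tokens and text tokens together.
            return self.llm(torch.cat([vis_tokens, text_embeds], dim=1))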
dekhn
17 hours ago
Genomics, protein structure prediction, various forms of small molecule and large molecule drug discovery.
thesz
11 hours ago
None of the neural protein structure prediction papers I've read compare transformers to SAT solvers.
As if this approach [1] does not exist.
dekhn
11 hours ago
What exactly are you suggesting - that the SAT solver example given in the paper (applied to the HP model of protein structure), or an improvement on it, could produce protein structure prediction at the level of AlphaFold?
This seems extremely, extremely unlikely, for many reasons. The HP model is a simplification of true protein folding/structure adoption, while AlphaFold (and the open source equivalents) works with real proteins. The SAT approach uses little to no prior knowledge about protein structures, unlike AlphaFold (which has basically memorized and generalized the PDB). Expressing all the necessary details would likely exceed the capabilities of the best SAT solvers.
(don't get me wrong- SAT and other constraint approaches are powerful tools. But I do not think they are the best approach for protein structure prediction).
YeGoblynQueenne
2 hours ago
(Not the OP) If we've learned one thing from the ascent of neural nets, it's that you have no idea whether something works until you have tried it. And by "tried it" I mean really, really gave it a go, as hard as possible, with all the resources you can muster. The industry has thrown everything it's got at Transformers, but there are many other approaches that have at least as promising empirical results and much better theoretical support, yet have not been pursued with the same fervor, so we have no idea how well or badly they'd do against neural nets if they were given the same treatment.
Like the OP says, it's as if such approaches don't even exist.
chermi
10 hours ago
Hey man, CASP is an open playing field. If it were better, they would've shown it by now.
YeGoblynQueenne
2 hours ago
Said somebody about neural nets in the 1980's.
CHY872
14 hours ago
In computer vision, transformers have basically taken over most perception fields. If you look at paperswithcode benchmarks, it's common to find 10/10 recent winners being transformer-based on common CV problems. Note, I'm not talking about VLMs here, just small ViTs with a few million parameters. YOLOs and other CNNs are still hanging around for detection, but it's only a matter of time.
thesz
11 hours ago
Can it be that transformer-based solutions come from well-funded organizations that can spend vast amounts of money on training expensive (O(n^3)) models?
Are there any papers that compare predictive power against compute needed?
nickpsecurity
6 hours ago
You're onto something. The BabyLM competition had caps. Many LLMs were using 1TB of training data for some time.
In many cases, I can't even see how many GPU hours, or what size cluster of which GPUs, the pretraining required. If I can't afford it, then it doesn't matter what it achieved. What I can afford is what I have to choose from.
Profan
17 hours ago
hah well, transformative doesn't necessarily mean positive!
econ
15 hours ago
All we get is distraction.
jonas21
16 hours ago
Out of curiosity, what field are you in?
EGreg
16 hours ago
AI fan (type 1 -- AI made a big breakthrough) meets AI defender (type 2 -- AI hasn't fundamentally changed anything; the problems were already there).
Defenders are supposed to defend against attacks on AI, but here it misfired, so the conversation should be interesting.
That's because the defender is actually a skeptic of AI. But the first sentence sounded like a typical "nothing to see here" defense of AI.
blibble
14 hours ago
> but I would say the impact is almost entirely negative.
quite
the transformer innovation was to bring the cost of producing incorrect but plausible-looking content (slop), in any modality, down to near zero
not a positive thing for anyone other than spammers
mountainriver
15 hours ago
Software, and it’s wildly positive.
Takes like this are utterly insane to me
Silamoth
15 hours ago
It’s had an impact on software for sure. Now I have to fix my coworker’s AI slop code all the time. I guess it could be a positive for my job security. But acting like “AI” has had a wildly positive impact on software seems, at best, a simplification and, at worst, the opposite of reality.
sponnath
15 hours ago
Wouldn't say it's transformative.
mrieck
14 hours ago
My workflow is transformed. If yours isn’t you’re missing out.
Days that I’d normally feel overwhelmed from requests by management are just Claude Code and chill days now.
sponnath
an hour ago
I've tried to make AI work, but a lot of the time the overall productivity gains I get are so negligible that I wouldn't say it's been transformative for me. I think the fact that so many of us here on HN have such different experiences with AI goes to show that it is indeed not as transformative as we think (for the field at least). I'm not trying to invalidate your experience.
warkdarrior
16 hours ago
Spam detection and phishing detection are completely different than 5 years ago, as one cannot rely on typos and grammar mistakes to identify bad content.
walkabout
16 hours ago
Spam, scams, propaganda, and astroturfing are easily the largest beneficiaries of LLM automation so far. When what you're doing is producing throw-away content at enormous scale, with a high tolerance for mistakes as long as the volume is high, LLMs really are the 100x rocket-boots their boosters keep promising for other areas (where such results haven't shown up outside a few tiny, but sometimes important, niches, so far).
dare944
11 hours ago
> when what you're doing is producing throw-away content at enormous scale and have a high tolerance for mistakes, as long as the volume is high.
This also describes most modern software development
nickpsecurity
6 hours ago
Robocalls. Almost all that I receive are AIs. It's aggravating, because I'd have enjoyed talking to a person in India or wherever, but instead I get the same AIs, which filter or argue with me.
I just bought Robokiller. I have it set to contacts-only cuz the AIs were calling me all day.
visarga
15 hours ago
It seems unfair to call out LLMs for "spam, scams, propaganda, and astroturfing." These problems are largely the result of platform optimization for engagement and SEO competition for attention. This isn't unique to models; even we, humans, when operating without feedback, generate mostly slop. Curation is performed by the environment and the passage of time, which reveals consequences. LLMs taken in isolation from their environment are just as sloppy as brains in a similar situation.
Therefore, the correct attitude to take regarding LLMs is to create ways for them to receive useful feedback on their outputs. When using a coding agent, have the agent work against tests. Scaffold constraints and feedback around it. AlphaZero, for example, had abundant environmental feedback and achieved amazing (superhuman) results. Other Alpha models (for math, coding, etc.) that operated within validation loops reached olympic levels in specific types of problem-solving. The limitation of LLMs is actually a limitation of their incomplete coupling with the external world.
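Concretely, the "work against tests" loop is something like this sketch (run_model and run_tests are placeholders of my own, not any real agent framework's API):

    def refine(task, run_model, run_tests, max_iters=5):
        attempt = run_model(task, feedback=None)
        for _ in range(max_iters):
            ok, report = run_tests(attempt)
            if ok:
                return attempt                          # the environment says we're done
            attempt = run_model(task, feedback=report)  # iterate on the failure report
        return attempt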
In fact, you don't even need a super intelligent agent to make progress; copying and competition are sufficient. Evolution shows you can create all life, including us and our culture and technology, without a very smart learning algorithm. What it has instead is plenty of feedback. Intelligence is not in the brain or the LLM; it is in the ecosystem, the society of agents, and the world. Intelligence is the result of having to pay the cost of our execution to continue to exist, a strategy to balance the cost of life.
What I mean by feedback is exploration: executing novel actions, or actions in novel environment configurations, observing the outcomes, adjusting, and iterating. The feedback becomes part of the model, and the model part of the action-feedback process. They co-create each other.
walkabout
15 hours ago
> It seems unfair to call out LLMs for "spam, scams, propaganda, and astroturfing." These problems are largely the result of platform optimization for engagement and SEO competition for attention.
They didn't create those markets, but they're the markets for which LLMs enhance productivity and capability the best right now, because they're the ones that need the least supervision of input to and output from the LLMs, and they happen to be otherwise well-suited to the kind of work it is, besides.
> This isn't unique to models; even we, humans, when operating without feedback, generate mostly slop.
I don't understand the relevance of this.
> Curation is performed by the environment and the passage of time, which reveals consequences.
I'd say it's revealed by human judgement and eroded by chance, but either way, I still don't get the relevance.
> LLMs taken in isolation from their environment are just as sloppy as brains in a similar situation.
Sure? And clouds are often fluffy. Water is often wet. Relevance?
The rest of this is a description of how we can make LLMs work better, which amounts to more work than required to make LLMs pay off enormously for the purposes I called out, so... are we even in disagreement? I don't disagree that perhaps this will change, and explicitly bound my original claim ("so far") for that reason.
... are you actually demonstrating my point, on purpose, by responding with LLM slop?
visarga
13 hours ago
LLMs can generate slop if used without good feedback, or when trying to minimize human contribution. But the same LLMs can filter out the dark patterns. They can use search and compare against dozens or hundreds of web pages, which is what deep-research-mode outputs are. These reports can still contain mistakes, but we can iterate: generate multiple deep reports from different models with different web search tools, then do a comparative analysis once more. There is no reason we should consume the raw web, full of "spam, scams, propaganda, and astroturfing", today.
throwaway290
11 hours ago
So they can sort of maybe solve the problems they create, except some people profit from it and can mass-manipulate minds in new and exciting ways
econ
14 hours ago
For a good while I joked that I could easily write a bot that makes more interesting conversation than you. The human slop will drown in AI slop. Looks like we will need to make more of an effort when publishing, if not develop our own personality.
pixelpoet
15 hours ago
> It seems unfair to call out LLMs for "spam, scams, propaganda, and astroturfing."
You should hear HN talk about crypto. If the knife were invented today they'd have a field day calling it the most evil plaything of bandits, etc. Nothing about human nature, of course.
Edit: There it is! Like clockwork.
onlyrealcuzzo
16 hours ago
The signals might be different, but the underlying mechanism is still incredibly efficient, no?
CamperBob2
17 hours ago
Which fields have they completely transformed?
Simultaneously discovering and leveraging the functional nature of language seems like kind of a big deal.
jimbo808
16 hours ago
Can you explain what this means?
CamperBob2
15 hours ago
Given that we can train a transformer model by shoveling large amounts of inert text at it, and then use it to compose original works and solve original problems with the addition of nothing more than generic computing power, we can conclude that there's nothing special about what the human brain does.
All that remains is to come up with a way to integrate short-term experience into long-term memory, and we can call the job of emulating our brains done, at least in principle. Everything after that will amount to detail work.
Marshferm
13 hours ago
If the brain only uses language like a sportscaster explaining post-hoc what the self and others are doing (experimental evidence 2003, empirical proof 2016), then what's special about brains is entirely separate from what language is or appears to be. It's not even like a ticker tape that records trades, it's like a disengaged, arbitrary set of sequences that have nothing to do with what we're doing (and thinking!).
Language is like a disembodied science-fiction narration.
Wegner's The Illusion of Conscious Will
https://www.its.caltech.edu/~squartz/wegner2.pdf
Fedorenko's Language and Thought are Not The Same Thing
jimbo808
15 hours ago
> we can conclude that there's nothing special about what the human brain does
...lol. Yikes.
I do not accept your premise. At all.
> use it to compose original works and solve original problems
Which original works and original problems have LLMs solved, exactly? You might find a random article or stealth marketing paper that claims to have solved some novel problem, but if what you're saying were actually true, we'd be flooded with original works and new problems being solved. So where are all these original works?
> All that remains is to come up with a way to integrate short-term experience into long-term memory, and we can call the job of emulating our brains done, at least in principle
What experience do you have that caused you to believe these things?
CamperBob2
15 hours ago
Which is fine, but it's now clear where the burden of proof lies, and IMHO we have transformer-based language models to thank for that.
If anyone still insists on hidden magical components ranging from immortal souls to Penrose's quantum woo, well... let's see what you've got.
emptysongglass
13 hours ago
No, the burden of proof is on you to deliver. You are the claimant, you provide the proof. You made a drive-by assertion with no evidence or even arguments.
I also do not accept your assertion, at all. Humans largely function on the basis of desire-fulfilment, be that eating, fucking, seeking safety, gaining power, or any of the other myriad human activities. Our brains, and the brains of all the animals before us, have evolved for that purpose. For evidence, start with Skinner or the millions of behavioral analysis studies done in that field.
Our thoughts lend themselves to those activities. They arise from desire. Transformers have nothing to do with human cognition because they do not contain the basic chemical building blocks that precede and give rise to human cognition. They are, in fact, stochastic parrots, that can fool others, like yourself, into believing they are somehow thinking.
[1] Libet, B., Gleason, C. A., Wright, E. W., & Pearl, D. K. (1983). Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential). Brain, 106(3), 623-642.
[2] Soon, C. S., Brass, M., Heinze, H. J., & Haynes, J. D. (2008). Unconscious determinants of free decisions in the human brain. Nature Neuroscience, 11(5), 543-545.
[3] Berridge, K. C., & Robinson, T. E. (2003). Parsing reward. Trends in Neurosciences, 26(9), 507-513. (This paper reviews the "wanting" vs. "liking" distinction, where unconscious "wanting" or desire is driven by dopamine).
[4] Kavanagh, D. J., Andrade, J., & May, J. (2005). Elaborated Intrusion theory of desire: a multi-component cognitive model of craving. British Journal of Health Psychology, 10(4), 515-532. (This model proposes that desires begin as unconscious "intrusions" that precede conscious thought and elaboration).
CamperBob2
12 hours ago
If anything, your citation 1, along with subsequent fMRI studies, backs up my point. We literally don't know what we're going to do next. Is that a hallmark of cognition in your book? The rest are simply irrelevant.
> They are, in fact, stochastic parrots, that can fool others, like yourself, into believing they are somehow thinking.
What makes you think you're not arguing with one now?
emptysongglass
12 hours ago
How does that back up your point?
You are not making an argument, you are just making assertions without evidence and then telling us the burden of proof is on us to tell you why not.
If you went walking down the streets yelling the world is run by a secret cabal of reptile-people without evidence, you would rightfully be declared insane.
Our feelings and desires largely determine the content of our thoughts and actions. LLMs do not function as such.
Whether I am arguing with a parrot or not has nothing to do with cognition. A parrot being able to usefully fool a human has nothing to do with cognition.
jimbo808
15 hours ago
I had edited my comment, I think you replied before I saved it.
CamperBob2
14 hours ago
I was just saying that it's fine if you don't accept my premise, but that doesn't change the reality of the premise.
Performance on the International Math Olympiad qualifies as solving original problems, for example. If you disagree, that's a case you have to make. Transformer models are unquestionably better at math than I am. They are also better at composition, and will soon be better at programming, if they aren't already.
Every time a magazine editor is fooled by AI slop, every time an entire subreddit loses the Turing test to somebody's ethically-questionable 'experiment', every time an AI-rendered image wins a contest meant for human artists -- those are original works.
Heck, looking at my Spotify playlist, I'd be amazed if I haven't already been fooled by AI-composed music. If it hasn't happened yet, it will probably happen next week, or maybe next year. Certainly within the next five years.
rhetocj23
13 hours ago
Someone's drunk too much of the AI-hype-juice. You'll sober up in time.
leptons
14 hours ago
Humans hallucinate too, but that usually indicates dysfunction; it's not expected as normal operational output.
>If anyone still insists on hidden magical components ranging from immortal souls to Penrose's quantum woo, well... let's see what you've got.
This isn't too far off from the marketing and hypesteria surrounding "AI" companies.
rhetocj23
9 hours ago
"Humans hallucinate too"
No they don't. Humans also know when they are pretending to know what they are talking about; put said people against the wall and they will freely admit they have no idea what the buzzwords they are saying mean.
Machines possess no such characteristic.
leptons
9 hours ago
>"Humans hallucinate too"
>No they dont.
WTAF? Maybe you're new here, but the term "hallucinate" came from a very human experience, and was only co-opted recently by "AI" bros who wanted to anthropomorphize a tin can.
>Humans also know when they are pretending to know what they are talking about - put said people against the wall and they will freely admit they have no idea what the buzzwords they are saying mean.
>Machines possess no such characteristic.
"AI" will say whatever you want to hear to make you go away. That's the extent of their "characteristic". If it doesn't satisfy the user, they try again, and spit out whatever garbage it calculates should make the user go away. The machine has far less of an "idea" what it's saying.
krispyfi
5 hours ago
Aside: "Hallucinate" was an unfortunate choice of words. It should have been "confabulate". And, yes, humans confabulate, too.
epistasis
17 hours ago
> I think the valuable idea is probabilistic graphical models, of which transformers are an example. Combining probability with sequences, or with trees and graphs, is likely to remain a valuable area for research exploration for the foreseeable future.
As somebody who was a biiiiig user of probabilistic graphical models, and felt kind of left behind in this brave new world of stacked nets, I would love for my prior knowledge and experience to become valuable for a broader set of problem domains. However, I don't see it yet. Hope you are right!
cauliflower2718
15 hours ago
+1. I am also a big user of PGMs, and a big user of transformers, and I don't know what the parent comment is talking about, beyond that, for e.g. LLMs, sampling the next token can be thought of as sampling from a conditional distribution (of the next token, given previous tokens). But this connection between transformers and conditional distributions is about autoregressive generation and training with a next-token prediction loss, not about the transformer architecture itself, which mostly seems to be good because it is expressive and scalable (i.e. can be hardware-optimized).
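Concretely, the conditional-distribution view is just this (a toy sketch; `model` is assumed to be any map from a token prefix to next-token logits, hypothetical here):

    import torch

    def sample_next(logits, temperature=1.0):
        # sample from p(next token | prefix), reshaped by temperature
        probs = torch.softmax(logits / temperature, dim=-1)
        return torch.multinomial(probs, num_samples=1).item()

    def generate(model, prefix, n_steps):
        tokens = list(prefix)
        for _ in range(n_steps):
            logits = model(torch.tensor(tokens))[-1]  # logits at the last position
            tokens.append(sample_next(logits))        # one autoregressive step
        return tokens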
Source: I am a PhD student, this is kinda my wheelhouse
AaronAPU
17 hours ago
I have my own probabilistic hyper-graph model which I have never written down in an article to share. You see people converging on this idea all over if you’re looking for it.
Wish there were more hours in the day.
rbartelme
16 hours ago
Yeah I think this is definitely the future. Recently, I too have spent considerable time on probabilistic hyper-graph models in certain domains of science. Maybe it _is_ the next big thing.
hammock
17 hours ago
> I think the valuable idea is probabilistic graphical models, of which transformers are an example. Combining probability with sequences, or with trees and graphs, is likely to remain a valuable area
I agree. Causal inference and symbolic reasoning would be SUPER juicy nuts to crack, more so than what we got from transformers.
pigeons
14 hours ago
Not doubting in any way, but what are some fields it transformed?
eli_gottlieb
14 hours ago
> probabilistic graphical models, of which transformers are an example
Having done my PhD in probabilistic programming... what?
dekhn
11 hours ago
I was talking about things inspired by (for example) hidden Markov models. See https://en.wikipedia.org/wiki/Graphical_model
In biology, PGMs were one of the first successful forms of "machine learning": given a large set of examples, train a graphical model's probabilities using EM, then pass many more examples through the model for classification. The HMM for proteins is pretty straightforward, basically just a probabilistic extension of using dynamic programming to do string alignment.
My perspective- which is a massive simplification- is that sequence models are a form of graphical model, although the graphs tend to be fairly "linear" and the predictions generate sequences (lists) rather than trees or graphs.
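For the curious, the dynamic-programming flavor is easy to show with a toy forward algorithm: two hidden states, two symbols, made-up parameters (EM would re-estimate these tables from data):

    import numpy as np

    start = np.array([0.6, 0.4])            # P(initial hidden state)
    trans = np.array([[0.7, 0.3],
                      [0.4, 0.6]])          # P(state_t | state_{t-1})
    emit  = np.array([[0.9, 0.1],
                      [0.2, 0.8]])          # P(symbol | state)

    def forward(obs):
        alpha = start * emit[:, obs[0]]
        for o in obs[1:]:
            alpha = (alpha @ trans) * emit[:, o]   # the DP recurrence
        return alpha.sum()                         # P(observed symbol sequence)

    print(forward([0, 1, 1, 0]))  # likelihood of a toy symbol sequence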
pishpash
11 hours ago
It's got nothing to do with PGMs. However, there is the flavor of describing graph structure by soft edge weights vs. hard/pruned edge connections. It's not that surprising that one does better than the other, and it's a very obvious and classical idea. For a time there were people working on NN structure learning, and this is a natural step. I don't think there is any breakthrough here, other than that computational power caught up to make it feasible.
cyanydeez
10 hours ago
Cancer is also fertile. It's more addiction than revolution, I'm afraid.