OpenAI researcher announced GPT-5 math breakthrough that never happened

269 points | posted 6 hours ago
by Topfi

173 Comments

Timsky

4 hours ago

> GPT-5 is proving useful as a literature review assistant

No, it is not. It only produces a highly convincing counterfeit. I am honestly happy for people who are satisfied with its output: life is way easier for them than for me. Obviously, the machine discriminates against me personally. When I spend hours in the library looking for some engineering-related math from the '70s and '80s, as a last-resort measure I can try gambling with the chat, hoping for any tiny clue to answer my question. And then for the following hours I am trying to understand what is wrong with the chat output. Most often I experience the "it simply can't be" feeling, and I know I am not the only one having it.

crazygringo

3 hours ago

In my experience doing literature super-deep-dives, it hallucinates sources about 50% of the time. (For higher-level literature surveys, it's maybe 5%.)

Of the other 50% that are real, it's often ~evenly split into sources I'm familiar with and sources I'm not.

So it's hugely useful in surfacing papers that I may very well never have found otherwise using e.g. Google Scholar. It's particularly useful in finding relevant work in parallel subfields -- e.g. if you work in physics but it turns out there are math results, or you work in political science and it turns out there are relevant findings from anthropology. And also just obscure stuff -- a random thesis that never got published or cited but the PDF is online and turns out to be relevant.

It doesn't matter if 75% of the results are not useful to me or hallucinated. Those only waste me minutes. The other 25% more than make up for it -- they're things I simply might never find otherwise.

andrewflnr

3 hours ago

So, the exact stuff Google used to be good at.

ramenbytes

an hour ago

The exact stuff I now use Kagi for. Finding obscure relevant PDFs that Google didn't is literally one of the things that made me switch.

georgemcbay

2 hours ago

Pretty much, though Google got bad at these things well before LLMs really came onto the scene. We can all debate which project manager was responsible and the month and year things took a downward turn, but the IMO obvious catalyst was that "Barely Good Enough" search creates more ad impressions, especially when virtually all of the bad results you are serving are links to sites that also serve Google-managed ads.

xiphias2

5 minutes ago

There was a very clear turning point: when Amit Singhal was kicked out for sexual harassment in the MeToo era. He was the heart of search quality, but he went too far when he was drinking.

andrewflnr

2 hours ago

Oh, sure, Google was starting to take a dive almost a decade before LLMs came on the scene.

macrolime

3 hours ago

What is "it"? GPT-5 Auto? GPT-5 Pro? Deep Research? These have wildly different hallucination rates.

bathtub365

2 hours ago

If these rates are known, it would be great for OpenAI to be open about them so customers can make an informed decision.

Maxatar

2 minutes ago

OpenAI has published a great deal of information about hallucination rates, as have the other major LLM providers.

You can't just give one single global hallucination rate, since the rates depend on the use case. And despite the abundant information available on how to pick the appropriate tool for a given task, it seems no one cares to take the time to first recognize that these LLMs are tools, and that you do need to learn how to use these tools in order to be productive with them.

malfist

an hour ago

"Known" implies that these rates are consistent and measurable. It seems to me, that this is highly unlikely to be the case

scosman

2 hours ago

Saying it isn't useful is a bit of an overstatement. It can search, churn through 500k words in a few minutes, and come back with summaries, answers, and sources for each point.

Should you blindly trust the summary? No. Should you verify key claims by clicking through to the source? Yes. Is it still incredibly useful as a search tool and productivity booster? Absolutely.

scruple

2 hours ago

I gave it a PDF recently and asked it to help me generate some tables based on the information therein. I thought I'd be saving myself time. I spent easily twice as long as I would have if I had done it myself. It kept making trivial mistakes, misunderstanding what was in the PDF, hallucinating, etc.

Timsky

2 hours ago

It is excellent when just finding something is enough. Most often in my practice, I am dealing with questions that have no written-down answers, meaning the probability of finding a book or article that provides one is negligible. Instead, I am looking for indirect answers or proofs before I make a final engineering decision. Yet another problem is that the language itself changes over time. For instance, at the beginning of the 20th century, the integers were called integral numbers. IMHO, LLMs handle such cases poorly when used as a substitute for search engines. For full-text search I am using https://www.recoll.org/, a real time-saver for me, especially for desktop search.

scosman

an hour ago

> GPT-5 is proving useful as a literature review assistant

> No, it does not.

> It is excellent when just finding something is enough.

Timsky

an hour ago

I meant that it obviously fits your needs but not mine

happy_dog1

2 hours ago

I wonder whether, for a lot of the search and literature-review use cases where people are trying to use GPT-5 and similar models, we'd honestly be much better off with a really powerful semantic search engine. Any time you ask a chatbot to summarize the literature for you or answer your question, there's a risk it will hallucinate and give you an unreliable answer. Retrieving the nearest matches using LLM-generated document embeddings, by contrast, doesn't run any risk of hallucination and might be a powerful way to surface things that Google / Bing etc. wouldn't be able to find using their current algorithms.

To be fair, I don't know if something like this already exists and I'm just not aware of it.
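
Roughly what I have in mind, as a minimal sketch: embed the corpus once, embed the query, and return the nearest documents. Everything here (the model name, the toy corpus, the query) is a placeholder, not a real system.

  from sentence_transformers import SentenceTransformer
  import numpy as np

  # Toy corpus of abstracts; in practice this would be millions of documents.
  docs = [
      "Spectral methods for sparse matrix factorization ...",
      "Kinship networks in early anthropological field studies ...",
  ]

  model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model
  doc_vecs = model.encode(docs, normalize_embeddings=True)
  query_vec = model.encode(["divisibility results for integers, 1970s"],
                           normalize_embeddings=True)[0]

  # Cosine similarity reduces to a dot product on normalized vectors.
  scores = doc_vecs @ query_vec
  for i in np.argsort(-scores)[:5]:
      print(round(float(scores[i]), 3), docs[i])

The retrieval step can't hallucinate a paper that isn't in the index; the worst it can do is return irrelevant ones.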

Timsky

42 minutes ago

I think you have a very good point here: semantic search would be the best option for this kind of lookup. The items would have unique identifiers, so variations in language could be avoided. Unfortunately, I am not aware of any publicly available projects of this kind that analyze scientific reports at scale, apart from DBpedia and some biology-oriented ontologies.

Currently, I am applying RDF/OWL to describe some factual information and contradictions in the scientific literature, on an amateur level, so I do it mostly manually. The GPT discourse brings up not only human perception problems, such as cognitive biases, but also truly philosophical questions of epistemology that should be resolved beforehand. LLM developers cannot solve this because it is not under their control; they can only choose what to learn from. For instance, a scientific text is not absolute truth but rather a carefully verified and reviewed opinion, based on previous authoritative opinions and subject to change in the future. So the same author may hold various opinions over time, and more recent opinions are not necessarily more "truthful" ones. Now imagine a corresponding RDF triple (subject-predicate-object tuple) that describes all that. It is a pretty heavy thing, and no NLTK can decide for us what is true and what is not.
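
To illustrate what I mean, here is a minimal rdflib sketch (all names are made up): instead of asserting a statement as fact, you assert who stated it, where, and when, which is exactly what makes these triples so heavy.

  from rdflib import Graph, Literal, Namespace
  from rdflib.namespace import RDF, XSD

  EX = Namespace("http://example.org/")  # hypothetical vocabulary
  g = Graph()

  # Not "X is true", but "author S claimed X in paper P in 1974".
  claim = EX.claim_42
  g.add((claim, RDF.type, EX.Claim))
  g.add((claim, EX.statedBy, EX.author_smith))
  g.add((claim, EX.statedIn, EX.paper_1974))
  g.add((claim, EX.year, Literal("1974", datatype=XSD.gYear)))
  g.add((claim, EX.asserts, Literal("every integral number of this form is composite")))

  print(g.serialize(format="turtle"))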

andai

2 hours ago

There's this principle, I forget the name, where everyone reading the newspaper, on coming to a subject they're familiar with, instantly spots all the holes and all the errors and asks themselves how this was even published in the first place.

But then they flip to the next page and read a story on a subject they're not an expert in, and they accept all of it without question.

I think people might have a similar relationship with ChatGPT.

lukev

2 hours ago

The Gell-Mann Amnesia effect. And you're absolutely right, it's extremely pronounced in LLM users.

malshe

28 minutes ago

I think its scope is narrower than a lit review assistant. I use it mainly for finding papers that I or my RAs might have missed in our lit review.

I have a recent example where it helped me locate a highly relevant paper for my research. It was from an obscure journal and wouldn't show up in the first few pages of Google Scholar search. The paper was real and recently published.

However, using LLMs for doing lit review has been fraught with peril. LLMs often misinterpret the research findings or extrapolate them to make incorrect inferences.

glenstein

3 hours ago

Struggling to understand this one. Is it that (1) it's lopsided toward reference materials found on the modern internet and not as useful for reviewing literature from the Before Times or (2) it's offering specific solutions but you're skeptical of them?

kianN

2 hours ago

If you’re interested in a literature review tool, I built a public one for some friends in grad school that uses hierarchical mixture models to organize bulk searches and citation networks.

Example: https://platform.sturdystatistics.com/deepdive?search_type=e...

Timsky

31 minutes ago

Thank you for sharing! I like your dendrogram-like circular graphs! They are way more intuitive. That could be a nice companion to the bibliometrix/biblioshiny library for bibliometric analysis (https://www.bibliometrix.org/). I tried "Deep Dive" with my own query, and ... it unfortunately stops at the end of "Organizing results". Maybe I should try again later.

kianN

23 minutes ago

Haha that’s embarrassing! The progress bars are an estimate. If a paper has a lot of citations, it may take a bit longer than the duration of the bars but it will hopefully finish relatively soon!

Edit: Got home and checked the error logs. There was a very long search query with no results. Bug on my end to not return an error in that case.

gpjt

4 hours ago

To be fair to the OpenAI team, if read in context the situation is at worst ambiguous.

The deleted tweet that the article is about said "GPT-5 just found solutions to 10 (!) previously unsolved Erdös problems, and made progress on 11 others. These have all been open for decades." If it had been posted stand-alone then I would certainly agree that it was misleading, but it was not.

It was a quote-tweet of this: https://x.com/MarkSellke/status/1979226538059931886?t=OigN6t..., where the author is saying he's "pushing further on this".

The "this" in question is what this second tweet is in turn quote-tweeting: https://x.com/SebastienBubeck/status/1977181716457701775?t=T... -- where the author says "gpt5-pro is superhuman at literature search: [...] it just solved Erdos Problem #339 (listed as open in the official database erdosproblems.com/forum/thread/3…) by realizing that it had actually been solved 20 years ago"

So, reading the thread in order, you get

  * SebastienBubeck: "GPT-5 is really good at literature search, it 'solved' an apparently-open problem by finding an existing solution"
  * MarkSellke: "Now it's done ten more"
  * kevinweil: "Look at this cool stuff we've done!"
I think the problem here is the way quote-tweets work -- you only see the quoted post and not anything that it in turn is quoting. Kevin Weil had the two previous quotes in his context when he did his post and didn't consider the fact that readers would only see the first level, so they wouldn't have Sebastien Bubeck's post in mind when they read his.

That seems like an easy mistake to make entirely honestly, and I think the pile-on is a little unfair.

moefh

3 hours ago

> Kevin Weil had the two previous quotes in his context when he did his post and didn't consider the fact that readers would only see the first level, so wouldn't have Sebastien Bubek's post in mind when they read his.

No, Weil said he himself misunderstood Sellke's post[1].

Note Weil's wording (10 previously unsolved Erdos problems) vs. Sellke's wording (10 Erdos problems that were listed as open).

[1] https://x.com/kevinweil/status/1979270343941591525

card_zero

3 hours ago

So the first guy said "solved [...] by realizing that it had actually been solved 20 years ago", and the second guy said "found solutions to 10 (!) previously unsolved Erdös problems".

Previously unsolved. The context doesn't make that true, does it?

glenstein

3 hours ago

Right, and I would even go a step further and say the context from SebastienBubeck is stretching "solved" past its breaking point by equating literature research with self-bootstrapped problem solving. When it's later characterized as "previously unsolved", it's doubling down on the same equivocation.

Don't get me wrong, effectively surfacing unappreciated research is great and extremely valuable. So there's a real thing here but with the wrong headline attached to it.

Frieren

3 hours ago

> "GPT-5 is really good at literature search, it 'solved' an apparently-open problem by finding an existing solution"

Survivor bias.

I can assure you that GPT-5 fucks up even relatively easy searches. I need to have a very good idea of what the results should look like, and the ability to test them, to be able to use any result from GPT-5.

If I throw the dice 1,000 times and post about it each time I get a double six, am I the best dice thrower there is?

zacmps

3 hours ago

For literature search that might be OK. It doesn't need to replace any other tools, and if one time in ten it surfaces something you wouldn't have found otherwise, it could be worth the time spent on the dud attempts.

OtherShrezzing

3 hours ago

Am I correct in thinking this is the 2nd such fumble by a major lab? DeepMind released their “matrix multiplication better than SOTA” paper a few months back, which suggested Gemini had uncovered a new way to optimally multiply two matrices in fewer steps than previously known. Then immediately after their announcement, mathematicians pointed out that their newly discovered SOTA had been in the literature for 30-40 years, and was almost certainly in Gemini’s training set.

jsnell

8 minutes ago

That doesn't match my recollection of the AlphaEvolve release.

Some people just read the "48 multiplications for a 4x4 matrix multiplication" part, and thought they found prior art at that performance or better. But they missed that the supposed prior art had tighter requirements on the contents of the matrices, which meant those algorithms were not usable for implementing a recursive divide-and-conquer algorithm for much larger matrix multiplications.

Here is a HN poster claiming to be one of the authors rebutting the claim of prior art: https://news.ycombinator.com/item?id=43997136

ummonk

an hour ago

We also had the GPT-5 presentation which featured both incorrect bar charts (likely AI generated) and an incorrect explanation of lift.

ogogmad

34 minutes ago

No, your claim about matrix multiplication is false. Google's new algorithm can be applied recursively to 4x4 block matrices (over the field of complex numbers). This results in an asymptotically faster algorithm for n×n matrix multiplication than Strassen's. Earlier results on 4x4 matrices by Winograd and others did not extend to block matrices.
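
The asymptotics are easy to check: a scheme that multiplies two 4x4 block matrices with 48 multiplications, applied recursively, gives an exponent of log_4(48), versus log_2(7) for Strassen's 2x2 scheme. A quick sanity check (not from the paper, just the standard recursion argument):

  import math

  # m multiplications for an n x n block scheme -> O(N^log_n(m)) overall.
  strassen = math.log(7, 2)      # ~2.807
  alphaevolve = math.log(48, 4)  # ~2.793, slightly better

  print(f"Strassen exponent:    {strassen:.4f}")
  print(f"48-mult 4x4 exponent: {alphaevolve:.4f}")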

Google's result has subsequently been generalised: https://arxiv.org/abs/2506.13242

card_zero

2 hours ago

Well, it is important that we have some technology to prevent us from going round in circles by reinventing things, such as search.

glenstein

2 hours ago

It's an interesting type of fumble too, because it's easy to (mistakenly!) read it as "LLM tries and fails to solve problem but thinks it solved it" when really it's being credited with originality for discovering or reiterating solutions already out there in the literature.

It sounds like the content of the solutions themselves is perfectly fine, so it's unfortunate that the headline will leave the impression that these are just more hallucinations. They're not hallucinations, they're not wrong; the model is just wrongly credited with existing work. Which, you know, where have we heard that one before? It's like the stylistic "borrowing" from artists, but in research form.

camillomiller

an hour ago

I have some more mirrors for you to try and climb, if you need them.

sbaidon94

8 minutes ago

You would think OpenAI employees have a pretty good grasp of their models' capabilities, but even if you don't, you probably always want to be on the cautious side with every claim you see on the internet.

This just seems to be the OpenAI culture, which for better or worse has helped foster the AI hype environment we are currently in.

amirhirsch

4 hours ago

The sad truth about this incident is that it reveals that OpenAI does not have a serious effort to actually work on unsolved math problems.

grafmax

4 hours ago

I realized they jumped the shark when they announced the pivots to ads and porn. Markets haven’t caught on yet.

goalieca

3 hours ago

The porn pivot makes perfect sense. Porn is already quite fake and unconvincing and none of that matters.

mrbombastic

3 hours ago

It might not matter as far as profitability is concerned, but ethically the second-order effects will be very problematic. I am no puritan, but the widespread availability of porn has already affected people's sexual expectations greatly. AI-generated porn is going to remove even more guardrails around behavior previously considered deviant, and people will bring those expectations back to real life.

swat535

2 hours ago

This is the same argument that people used for video games, "rock music" and violent movies.

I would argue that AI generated porn might be more ethical than traditional porn because the risk of the models being abused or trafficked is virtually zero.

malfist

an hour ago

> because the risk of the models being abused or trafficked is virtually zero.

That's not really true. Look at one of the more common uses for AI porn: taking a photo of someone and making them nude.

Deepfake porn exists and it does harm

derektank

an hour ago

The harms associated with someone creating a deep fake of you are real but they're pretty insignificant compared to the harms associated with being sex trafficked or being exposed to an STI or being unable to find traditional employment after working in the industry.

throwaway-0001

5 minutes ago

Couldn't you just photoshop that before AI came out?

What if you get a model that is 99% similar to your "target"? What do we do with that?

ummonk

an hour ago

Would you support installing public spy cams in everyone's bedrooms so as to end the demand for human trafficking in porn?

derektank

39 minutes ago

No? And I didn't suggest deepfakes should be legal.

I was just pointing out that when you're talking about the scale of harm caused by the existing sex industry compared to the scale of harm caused by AI generated pornographic imagery, one far outweighs the other.

glenstein

2 hours ago

To perhaps make the same point as you in a different way, I have no issue with "deviancy" but I think it can accelerate the cycle of chasing a sugar high.

grafmax

an hour ago

People spin up ablated models for pennies. You don't need advanced reasoning for this crap. OpenAI has $8 billion-plus in burn. I guess it's all effectively paying for brand awareness?

chanux

3 hours ago

And there's no escape. The Internet was built for gambling and this.

throwacct

3 hours ago

Unfortunately, the porn pivot might be their path to "profitability".

goalieca

3 hours ago

Global porn industry revenue is about $100B. They won't take 10% of that. Real humans are already selling themselves pretty cheap, or free, en masse.

zeroonetwothree

4 hours ago

They know where the money is.

raincole

2 hours ago

I think people hugely overestimate how profitable porn (at least "actual" porn) is. Aylo (the owner of Pornhub) makes peanuts compared to Youtube or Disney.

grafmax

4 hours ago

It's standard practice for VC-backed companies to enshittify after building a moat, relying on user lock-in. What's remarkable is how quickly they've had to shift gears. And with this rapid pivot, it's questionable how large that moat really is.

HarHarVeryFunny

4 hours ago

The porn / sex-chat one is really disappointing. It seems they've given up even pretending that they are trying to do something beneficial for society. This is just a pure society-be-damned money grab.

disgruntledphd2

3 hours ago

They've raised far too much money for those kinda ethics, unfortunately.

bradly

3 hours ago

My hunch is that they don't have a way to stop anything, so they are creating verticals to at least contain porn, medical, higher-ed users.

j_maffe

2 hours ago

Ah... The classic "If we don't do it, someone else will"

Tell that to the thousands of 18-year-olds who'll be captured by this predatory service and get AI psychosis.

HarHarVeryFunny

3 hours ago

I'm pretty sure that if they hadn't deliberately chosen to train on sex chat/stories, etc., then the LLM wouldn't be any good at it. The model isn't getting this capability by training on Wikipedia or Reddit.

So it's not a matter of them being unable to do a good job of preventing the model from doing it, giving up, and instead encouraging it (which makes no sense anyway), but rather of them having chosen to train the model to do this. OpenAI is targeting porn as one of their profit centers.

derektank

an hour ago

>The model isn't getting this capability by training on WikiPedia or Reddit

I don't know about the former, but the latter absolutely has sexually explicit material that could make the model more likely to generate erotic stories, flirty chats, etc.

HarHarVeryFunny

an hour ago

OK, maybe that was a bad example, but it would be easy to build a classifier to identify such material and omit it from the training data if they wanted to. And now that they are going to be selling this, I'd assume they are explicitly seeking out and/or paying for the creation of training material of this type.

rowanG077

4 hours ago

How so? I wouldn't put much stock in a rogue employee announcing something wrong.

mrbungie

4 hours ago

That's not just any employee; it's their VP of Science.

amirhirsch

4 hours ago

The people involved are very smart and must know that AI doing novel math is a canary for AGI. A serious effort around solving open problems would not fuck up this kind of announcement.

jebarker

4 hours ago

That’s a non sequitur. They’re a fairly large organization, I’d be amazed if they don’t have multiple research sub-teams pursuing all sorts of different directions.

827a

4 hours ago

This happening the same week as DeepMind’s seemingly legitimate AI-assisted cancer treatment breakthrough is a startlingly bad look for OpenAI.

My boss always used to say “our only policy is, don’t be the reason we need to create a new policy”. I suspect OpenAI is going to have some new public communication policies going forward.

andrewstuart

5 hours ago

Humans hallucinating about AI.

JKCalhoun

4 hours ago

"OpenAI Researcher Hallucinates GPT-5 Math Breakthrough" could be a headline from The Onion.

reaperducer

4 hours ago

"OpenAI Researcher Hallucinates GPT-5 Math Breakthrough" could be a headline from The Onion.

Off topic, but I saw The Onion on sale in the magazine rack of Barnes and Noble last month.

For those who miss when it was a free rag in sidewalk newsstands, and don't want to pony up for a full subscription, this is an option.

antegamisou

4 hours ago

Seriously, those headlines are reaching Daily Mail levels of sensationalism.

nicce

4 hours ago

In the old world we would just use the word bullshit.

pera

4 hours ago

Heh, stockholders are not hallucinating: they know very well what they are doing.

skeeter2020

3 hours ago

Retail investors? No way. The fever dream may continue for a while, but eventually it will end. Meanwhile, we don't even know our full exposure to AI. It's going to be ugly, and beyond burying gold in my backyard, I can't even figure out how to hedge against this monster.

alkyon

3 hours ago

They started believing the very lies they invented.

moffkalast

3 hours ago

"The truth is usually just an excuse for a lack of imagination."

MattGaiser

4 hours ago

Humans "hallucinate" in the AI way constantly, which is why I don't see them as a barrier to LLMs replacing humans in many contexts. It really isn't unusual for a human to make stuff up or be unaware of stuff.

zeknife

4 hours ago

A human being informed of a mistake will usually be able to resolve it and learn something in the process, whereas an LLM is more likely to spiral into nonsense

MattGaiser

4 hours ago

You must know people without egos. Humans are better at correcting their mistakes, but far worse at admitting them.

But yes, as an edge case handler humans still have an edge.

topaz0

4 hours ago

LLMs by contrast love to admit their mistakes and self-flagellate, and then go on to not correct them. Seems like a worse tradeoff.

skeeter2020

3 hours ago

Not when your goal is to create ASI: Artificial Sycophant Intelligence

thaumasiotes

4 hours ago

It's true that the big public-facing chatbots love to admit to mistakes.

It's not obvious to me that they're better at admitting their mistakes. Part of being good at admitting mistakes is recognizing when you haven't made one. That humans tend to lean too far in that direction shouldn't suggest that the right amount of that behavior is... less than zero.

tonyhart7

4 hours ago

And this is why the LLM is getting cooked.

They fed internet data into that shit, then basically "told" the LLM to behave, because, surprise surprise, humans can sometimes be nastier.

alimw

4 hours ago

You must know better humans than I do.

topaz0

an hour ago

> Humans "hallucinate" in the AI way constantly

This is more and more clearly false. Humans certainly get things wrong, but the manner in which they get things wrong is just not comparable to how LLMs get things wrong, beyond the most superficial level.

skeeter2020

3 hours ago

Do you think the OpenAI human, when informed of their "oopsie" replied "You're right, there is existing evidence that this problem has already been solved. Blah Blah Blah ... and that's why our new model has made a huge breakthrough against previously unsolved math problems!"

zeroonetwothree

4 hours ago

Humans are a bit better at knowing which things are important and doing more research. Also better at being honest when directly pressed. And infinitely better at learning from errors.

(Yes, not everyone, but we do have some mechanisms to judge or encourage)

pas

4 hours ago

It's the same thing with self-driving: if you can make it safer than a good human driver, that's enough. But the bar is pretty low in driving (as evidenced by the hundreds of thousands of collisions, deaths, and permanent disabilities each year), and rather high in scientific publishing.

lapcat

4 hours ago

> Humans "hallucinate" in the AI way constantly

This claim is ambiguous. The use of the word "Humans" here obscures rather than clarifies the issue. Individual humans typically do not "hallucinate" constantly, especially not on the job. Any individual human who is as bad at their job as an LLM should indeed be replaced, by a more competent individual human, not by an equally incompetent LLM. This was true long before LLMs were invented.

In the movie "Bill and Ted's Excellent Adventure," the titular characters attempt to write a history report by asking questions of random strangers in a convenience store parking lot. This of course is ridiculous and more a reflection of the extreme laziness of Bill and Ted than anything else. Today, the lazy Bill and Ted would ask ChatGPT instead. It's equally ridiculous to defend the wild inaccuracy and hallucinations of LLMs by comparing them to average humans. It's not the job of humans to answer random questions on any subject.

Human subject matter experts are not perfect, but they're much better than average and don't hallucinate on their subjects. They also have accountability and paper trails, and can be individually discounted for gross misconduct, unlike LLMs.

random9749832

4 hours ago

Best case: Hallucination

Worst case (more probable): Lying

MPSimmons

4 hours ago

Hanlon's Razor

random9749832

4 hours ago

They are expanding into the adult market because they are running out of ideas. I think common sense is enough to decide what is what here.

forgetfulness

4 hours ago

Lying is a stupid way of selling something and making money

reaperducer

4 hours ago

> Lying is a stupid way of selling something and making money

Works for Elon.

rixed

4 hours ago

These days AIs just obsequiously praise whatever stupid ideas humans throw at them, which encourages humans to hallucinate breakthroughs.

But it's only a matter of time before AI gets better at prompt engineering.

/s?

YesBox

3 hours ago

Wouldn't be surprised if OpenAI employees are being asked to phrase (market) things this way. This is not the first time they claimed GPT-5 "solved" something [1]

[1] https://x.com/SebastienBubeck/status/1970875019803910478

edit: full text

It's becoming increasingly clear that gpt5 can solve MINOR open math problems, those that would require a day/few days of a good PhD student. Ofc it's not a 100% guarantee, eg below gpt5 solves 3/5 optimization conjectures. Imo full impact of this has yet to be internalized...

cedws

4 hours ago

Making such a claim should at the very least require proof that the information wasn’t in the training data.

flkiwi

an hour ago

Thanks for calling that out. You're right to be upset.

jgalt212

5 hours ago

After the circular financing schemes involving hundreds of billions of dollars were uncovered, nothing I read about the AI business and its artificial hype machine surprises me anymore.

llm_nerd

4 hours ago

Yann LeCun's "Hoisted by their own GPTards" is fantastic.

NitpickLawyer

4 hours ago

While Yann is clearly brilliant and has a deeper understanding of the roots of the field than many of us mortals, I think he's been on a Debbie Downer trend lately, and more importantly, some of his public stances have been proven wrong mere months or years after he made them.

I remember a public talk where he was on stage with some young researcher from MS (I think it was one of the authors of the "sparks of brilliance in gpt4" paper, but I'm not sure).

Anyway, throughout that talk he kept talking over the guy and didn't seem to listen, even though he obviously hadn't tried the "raw", "unaligned" model that the folks at MS were talking about.

And he made 2 big claims:

1) LLMs can't do math. He went on to "argue" that LLMs trick you with poetry that sounds good, but is highly subjective, and when tested on hard verifiable problems like math, they fail.

2) LLMs can't plan.

Well, merely one year later, here we are. AIME is saturated (with tool use), there's a gold at the IMO, and current agentic uses clearly can plan (and follow through on the plan, rewrite parts, finish tasks, etc.).

So, yeah, I'd take everything any one singular person says with a huge grain of salt. No matter how brilliant said individual is.

Edit: oh, and I forgot another important argument that Yann made at that time:

3) Because of the nature of LLMs, errors compound. So the longer you go in a session, the more errors accumulate, until they devolve into nonsense.

Again, mere months later the o series of models came out and basically proved this point moot. It turns out RL + long context mitigates this fairly well. And a year later, we have all SotA models being able to "solve" problems 100k+ tokens deep.

mrbungie

3 hours ago

Pretty sure you can fill a room with serious researchers who will at the very least doubt that 2) has been solved with LLMs, especially when talking about formal planning with pure LLMs and without a planning framework.

PS: So, just so we're clear: formal planning in AI ≠ making a coding plan in Cursor.

NitpickLawyer

3 hours ago

> with pure LLMs and without a planning framework.

Sure, but isn't that moving the goalposts? Why shouldn't we use LLMs + tools if they work? If anything it shows that the early detractors weren't even considering that this could work. Yann in particular was skeptical that long-context things could happen in LLMs at all. We now have "agents" that can work a problem for hours, with self-context trimming, planning to md files, editing those plans, and so on. All of this just works, today. We used to dream about it a year ago.

pessimizer

6 minutes ago

> Why shouldn't we use

So weird that you immediately move the goalposts after accusing somebody of moving the goalposts. Nobody on the planet told you not to use "LLMs + tools if they work." You've moved onto an entirely different discussion with a made-up person.

> All of this just works, today.

Also, it definitely doesn't "just work." It slops around, screws up, reinserts bugs, randomly removes features, ignores instructions, lies, and sometimes you get a lucky result or something close enough that you can fix up. Nothing that should be in production.

Not that they're not very cool and very helpful in a lot of ways. But I've found them more helpful in showing me how they would do something, and getting me so angry that they nerd-snipe me into doing it correctly. I have to admit, however, that 1) sometimes I'm not sure I'd have gotten there if I hadn't seen it not getting there, and 2) sometimes "doing it correctly" involves dumping the context and telling it almost exactly how I want something implemented.

badsectoracula

2 hours ago

> Sure, but isn't that moving the goalposts? Why shouldn't we use LLMs + tools if it works?

Personally I do not see it like that at all, as one is referring to LLMs specifically while the other is referring to LLMs plus a bunch of other stuff around them.

It is like person A claiming that GIF files can be used to play Doom deathmatches; person B responding that, no, a GIF file cannot start a Doom deathmatch, that it is fundamentally impossible to do so; and person A retorting that since the GIF format has a provision for advancing a frame on user input, a GIF viewer can interpret that input as the user wanting to launch Doom in deathmatch mode -- ergo, GIF files can be used to play Doom deathmatches.

NitpickLawyer

2 hours ago

At the end of the day, LLM + tools is asking the LLM to create a story with very specific points where "tool calls" are parts of the story, and "tool results" are like characters that provide context. The fact that they can output stories like that, with enough accuracy to make it worthwhile, is, IMO, proof that they can "do" whatever we say they can do. They can "do" math by creating a story where a character takes NL and invokes a calculator, and another character provides the actual computation. Cool. It's still the LLM driving the interaction. It's still the LLM creating the story.
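
A generic sketch of that loop, to make the "story with tool calls" framing concrete (no particular vendor API; the llm callable, the TOOL_CALL convention, and the tools dict are all made up):

  import json, re

  def run_agent(llm, tools, user_msg, max_steps=8):
      # The "story" is just the growing transcript; tool results are appended
      # as extra context and the model keeps writing.
      transcript = [{"role": "user", "content": user_msg}]
      for _ in range(max_steps):
          out = llm(transcript)  # assumed: returns the model's next message as text
          call = re.search(r'TOOL_CALL:\s*(\{.*\})', out, re.S)
          if not call:
              return out  # no tool call, so this is the final answer
          req = json.loads(call.group(1))
          result = tools[req["name"]](**req.get("args", {}))
          transcript.append({"role": "assistant", "content": out})
          transcript.append({"role": "tool", "content": str(result)})
      return "gave up after max_steps"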

badsectoracula

14 minutes ago

I think you have that last part backwards: it is not the LLM driving the interaction, it is the program that uses the LLM to generate the instructions that does the actual driving -- that is the bit that makes the LLM start doing things. Though that is just splitting hairs.

The original point was about the capabilities of LLMs themselves, since the context was the technology itself, not what you can do by making them part of a larger system that combines LLMs (perhaps more than one) with other tools.

Depending on the use case and context this distinction may or may not matter, e.g. if you are trying to sell the entire system, it probably is not any more important how the individual parts of the system work than what libraries you used to make the software.

However it can be important in other contexts, like evaluating the abilities of LLMs themselves.

For example, I have written a script on my PC that my window manager calls to grab whatever text I have selected in whatever application I'm running. It passes that text to a program I've written with llama.cpp, which loads Mistral Small with a prompt that makes it check for spelling and grammar mistakes, and which in turn produces some script-readable output that another script displays in a window.

This, in a way, is an entire system. This system helps me find grammar and spelling mistakes in the text I have selected when I'm writing documents where I care about finding such mistakes. However, it is not Mistral Small that has the functionality of finding grammar and spelling mistakes in my selected text; it only provides the part that does the text checking, and the rest is done by other external non-LLM pieces. An LLM cannot intercept keystrokes on my computer, it cannot grab my selected text, nor can it create a window on my desktop; it doesn't even understand these concepts. In a way this can be thought of as a limitation from the perspective of the end result I want, but I work around it with the other software I have attached to it.
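
For the curious, the glue looks roughly like this (paths, flags, and the xclip/zenity choices are placeholders; my real setup calls llama.cpp as a library rather than shelling out to the llama-cli binary):

  import subprocess

  MODEL = "/models/mistral-small.gguf"  # placeholder path

  def selected_text() -> str:
      # Grab the current X11 primary selection.
      return subprocess.run(["xclip", "-o", "-selection", "primary"],
                            capture_output=True, text=True).stdout

  def check(text: str) -> str:
      prompt = ("List any spelling or grammar mistakes in the following text, "
                "one per line, or 'OK' if there are none:\n\n" + text)
      out = subprocess.run(["llama-cli", "-m", MODEL, "-p", prompt],
                           capture_output=True, text=True)
      return out.stdout

  if __name__ == "__main__":
      # Show the model's report in a desktop popup.
      subprocess.run(["zenity", "--info", "--text", check(selected_text())])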

mrbungie

2 hours ago

> Sure, but isn't that moving the goalposts?

It can be seen that way, sure, but any time I see LeCun talking about this, he does recognize that you can patch your way around LLMs; the point is that you are going to hit limits eventually anyway. Specific planning benchmarks like Blocksworld and the like show that LLMs (with frameworks) hit limits when they're exposed to out-of-distribution problems, and that's a BIG problem.

> We now have "agents" that can work a problem for hours, with self context trimming, planning to md files, editing those plans and so on. All of this just works, today. We used to dream about it a year ago.

I use them every day, but I still wouldn't really let them work for hours on greenfield projects. And we're seeing big vibe coders like Karpathy say the same.

Topfi

3 hours ago

> AIME is saturated (with tool use) [...]

But isn't tool use kinda the crux here?

Correct me if I'm mistaken, but wasn't the argument back then about whether LLMs could solve maths problems without, e.g., writing Python to solve them? Because when "Sparks of AGI" came out in March, prompting gpt-3.5-turbo to code solutions to assist in solving maths problems, rather than just solving them directly, was already established and seemed like the path forward. Heck, it is still the way to go, despite major advancements.

Given that, was he truly mistaken on his assertions regarding LLMs solving maths? Same for "planning".

NitpickLawyer

3 hours ago

AIME was saturated with tool use (i.e. 99%) for SotA models, but pure NL, no-tool runs still perform "unreasonably well" on the task. Not 100%, but still around 90%. And with lots of compute it can reach 99% as well, apparently [1] (@512 rollouts, but still).

[1] - https://arxiv.org/pdf/2508.15260

goalieca

3 hours ago

> LLMs can't do math. He went on to "argue" that LLMs trick you with poetry that sounds good, but is highly subjective, and when tested on hard verifiable problems like math, they fail.

They really can't. Token prediction based on context does not reason. You can scramble to submit PRs to ChatGPT to keep up with the "how many Rs in blueberry" kind of problems, but it's clear they can't even keep up with shitposters on Reddit.

And your second and third points, about planning and compounding errors, remain challenges... probably unsolvable with LLM approaches.

NitpickLawyer

3 hours ago

> They really can’t. Token prediction based on context does not reason.

Debating about "reasoning" or not is not fruitful, IMO. It's an endless debate that can go anywhere and nowhere in particular. I try to look at results:

https://arxiv.org/pdf/2508.15260

Abstract:

> Large Language Models (LLMs) have shown great potential in reasoning tasks through test-time scaling methods like self-consistency with majority voting. However, this approach often leads to diminishing returns in accuracy and high computational overhead. To address these challenges, we introduce Deep Think with Confidence (DeepConf), a simple yet powerful method that enhances both reasoning efficiency and performance at test time. DeepConf leverages model-internal confidence signals to dynamically filter out low-quality reasoning traces during or after generation. It requires no additional model training or hyperparameter tuning and can be seamlessly integrated into existing serving frameworks. We evaluate DeepConf across a variety of reasoning tasks and the latest open-source models, including Qwen 3 and GPT-OSS series. Notably, on challenging benchmarks such as AIME 2025, DeepConf@512 achieves up to 99.9% accuracy and reduces generated tokens by up to 84.7% compared to full parallel thinking.
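
Stripped of the details, the recipe is roughly "sample many traces, keep the confident ones, majority-vote the answers". A toy sketch of that idea (the confidence signal here is just a mean token log-probability; the paper's actual filtering is more sophisticated):

  from collections import Counter

  def confidence_vote(samples, keep_fraction=0.5):
      """samples: list of (final_answer, mean_token_logprob) from parallel rollouts."""
      ranked = sorted(samples, key=lambda s: s[1], reverse=True)
      kept = ranked[:max(1, int(len(ranked) * keep_fraction))]
      # Majority vote over the surviving traces' final answers.
      return Counter(ans for ans, _ in kept).most_common(1)[0][0]

  rollouts = [("42", -0.3), ("42", -0.4), ("17", -1.9),
              ("42", -0.5), ("17", -2.2), ("41", -1.1)]
  print(confidence_vote(rollouts))  # -> "42"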

goalieca

2 hours ago

> Debating about "reasoning" or not is not fruitful, IMO.

That's kind of the whole point, isn't it? Humans can automate simple tasks very effectively and cheaply already. If I ask the pro versions of an LLM what the Unicode value of a seahorse is, and it shows a picture of a horse and gives me the Unicode value for a third, completely unrelated animal, then it's pretty clear it can't reason itself out of a wet paper bag.

NitpickLawyer

2 hours ago

Sorry, perhaps I worded that poorly. I meant debating whether context stuffing is or isn't "reasoning". At the end of the day, whatever RL + long context does to LLMs seems to provide good results. Reasoning or not :)

goalieca

a few seconds ago

Well, that's my point, and what I think the engineers are screaming at the top of their lungs these days: that it's a net negative. It makes a really good demo but hasn't won anything except maybe translation and simple graphics generation.

frays

4 hours ago

I might be missing context here, but I'm surprised to see Yann using language that plays on 'retard.'

That seems out of character for him - more like something I'd expect from Elon Musk. What's the context I'm missing?

znkr

4 hours ago

I don’t think it’s a wordplay with the r-word, but rather a reference to the famous Shakespeare quote: “Hoist with his own petard”. It’s become an English proverb. (A petard is a smallish bomb)

card_zero

3 hours ago

From péter, to fart.

Possibly entered the language as a saying due to Shakespeare being scurrilous.

grey-area

4 hours ago

Hoist (thrown in the air) by your own petard (bomb) is a common phrase.

JKCalhoun

4 hours ago

I try not to lose sight of the first time that I heard (some years back) that people were using this new LLM thing for DM'ing ("dungeon mastering", leading) a game of Dungeons and Dragons. I thought, this must be bullshit or some kind of witchcraft.

Definitely not anti-AI here. I think I have been disappointed though, since then, to slowly learn that they're (still) little beyond that.

Still amazing though. And better than a Google search (IMHO).

random9749832

4 hours ago

You are telling me a language model trained on Reddit can't solve novel problems? Shocking.

Edit: we are in peak damage control phase of the hype cycle.

creativeCak3

3 hours ago

Can't wait for the bubble to burst so we can get back to solving real problems (like the fact that we have very little competition in the CPU market right now; AMD is getting way too comfortable...). I do think, though, that when this bubble bursts it will hurt the machine learning field (which does have people doing practically useful stuff like protein folding), and investors might pull all funding, because this generative nonsense (which has no real use beyond generating porn) will taint the entire field, even the parts of it that are actually useful.

kif

5 hours ago

This honestly doesn't surprise me. We have reached a point where it's becoming clearer and clearer that AGI is nowhere to be seen, whereas advances in LLMs' ability to 'reason' have slowed to (almost?) a halt.

dawnerd

4 hours ago

But if you ask an AI hype person, they'll say we're almost there, we just need a few more gigawatts of compute!

vbezhenar

4 hours ago

In my book, chat-based AGI was reached years ago, when I could no longer reliably distinguish the computer from a human.

Solving problems that humanity couldn't solve is super-AGI or something like that. That, indeed, is not here yet.

3836293648

4 hours ago

Beating the Turing Test is not AGI, but it is beating the Turing Test and that was impressive enough when it happened

jdiff

4 hours ago

We're not even solving problems that humanity can solve. There have been several times where I've posed to models a geometry problem that was novel but possible for me to solve on my own, and the LLMs have fallen flat every time. I'm no mathematician, and these are not complex problems, but they're well beyond any AI, even when guided. Instead, they're left to me, my trusty whiteboard, and a non-negligible amount of manual brute-force shuffling of terms until it comes out right.

They're good at the Turing test. But that only marks them as indistinguishable from humans in casual conversation. They are fantastic at that. And a few other things, to be clear. Quick comprehension of an entire codebase for fast queries is horribly useful. But they are a long way from human-level general intelligence.

steveBK123

4 hours ago

Hence the pivot into ads, shop-in-chat and umm.. adult content.

ripped_britches

4 hours ago

I make mistakes all the time. This seems like a genuine mistake, not malice.

Imagine if you were talking about your own work online, made an honest mistake, and then the whole industry roasted you for it.

I’m so tired of hearing everyone take stabs at people at OpenAI just because they don’t personally like sama or something.

strongbond

3 hours ago

Maybe OpenAI shouldn't be so stabbable?

Analemma_

4 hours ago

“AGI achieved internally”

Another case of culture flowing from the top I guess.

amelius

5 hours ago

> Summary (from the article)

* OpenAI researchers claimed or suggested that GPT-5 had solved unsolved math problems, but in reality, the model only found known results that were unfamiliar to the operator of erdosproblems.com.

* Mathematician Thomas Bloom and DeepMind CEO Demis Hassabis criticized the announcement as misleading, leading the researchers to retract or amend their original claims.

* According to mathematician Terence Tao, AI models like GPT-5 are currently most helpful for speeding up basic research tasks such as literature review, rather than independently solving complex mathematical problems.

HarHarVeryFunny

4 hours ago

> GPT-5 had only surfaced existing research that Bloom had missed

So GPT-5 didn't derive anything itself -- it was just an effective search engine for prior research, which is useful, but not any sort of breakthrough whatsoever.

phplovesong

4 hours ago

How f-ing obvious was it that the AI slop did not do anything other than scrape some websites?

Jweb_Guru

4 hours ago

I felt like I was going crazy when people uncritically accepted the original claim from OpenAI. Have people actually used these models?

mentalgear

5 hours ago

Another instance of OpenAI manipulating results to prolong their unsustainable, circular hype bubble.

The inevitable collapse could be even more devastating than the 2008 financial crisis.

All while vast resources are being wasted on non-verifiable gen-AI slop, while real approaches (neuro-symbolic ones like DeepMind's AlphaFold) are mostly ignored financially because they don't generate the quick stock market increases that hype does.

the_duke

5 hours ago

People keep spouting this, but I don't see how the AI bubble bursting would be all that devastating.

2008 was a systemic breakdown rippling through the foundations of the financial system.

It would lead to a market crash (80% of gains this year were big tech/AI) and likely a full recession in the US, but nothing nearly as dramatic as a global systemic crisis.

In contrast to the dot com bubble, the huge AI spending is also concentrated on relatively few companies, many with deep pockets from other revenue sources (Google, Meta, Microsoft, Oracle), and the others are mostly private companies that won't have massive impact on the stock market.

A sudden stop to the AI craze would be hard for hardware companies and a few big AI-only startups, but the financial fallout would be much more contained than either dot com or 2008.

jcranmer

3 hours ago

There's a few variables which can make it much worse.

The first is how much of the capital expenditures are being fueled by debt that won't be repaid, and how much that unpaid debt harms lending institutions. This is fundamentally how a few bad debts in 2008 broke the entire financial system: bad loans felled Lehman Brothers, which caused one money market fund to break the buck, which spurred a massive exodus from the money markets rather literally overnight.

The second issue is the psychological impact of 40% of market value just evaporating. A lot of people have indirect exposure to the stock market and these stocks in particular (via 401(k)s or pensions), and seeing that much of their wealth evaporate will definitely have some repercussions on consumer confidence.

Topfi

4 hours ago

Isn't the dot com bubble a far better proxy? Notably, today's spending is both higher and more concentrated in a few companies that a large part of the population has exposure to by way of pension funds, ETFs, etc. (most dot com companies weren't publicly traded and were far smaller than MSFT, Alphabet, Meta, Oracle, and NVDA, which make up most of the investment today).

the_duke

4 hours ago

Sure, but all of the above have solid businesses that rake in lots of money, revenue based on AI is a small percentage for them.

An AI bust would take the stock prices down a good deal, but the stock gains have been relatively moderate. Year on year: Microsoft +14%, Meta +24%, Google +40%, Oracle +60%, ... And a notable chunk of those gains have indirectly come from the dollar devaluing.

Nvidia would be hit much harder of course.

There are a good number of smaller AI startups, but a lot of the AI development is concentrated in the big dogs; it's not nearly as systemic as in dot com, where a lot of businesses went under completely.

And even with an AI freeze, there is plenty of value and usage there already that will not go away, but will keep expanding (AI chat, AI coding, etc) which will mitigate things.

techblueberry

4 hours ago

My theory on contagion would be this: there's been lots of talk about these companies starting to rack up debt, and I think AI is so tied into US GDP that things could cascade.

If the stock market crashes -- and there's lots of talk about how wealth and debt are interlinked -- could the crash be general enough to trigger calls on debt backed by stocks?

My recollection of 2008 is that we didn't learn how bad it was until after. The tech companies have been so desperate for a win, I wonder if some of them are over their skis in some way, and if there are banks that are risking it all on AI. (We know some tech bros think the bet on AI is a longtermist-like bet, closer to religion than reason, and that it's worth risking everything because the payback could be in the hundreds of trillions.)

Combine this with the fact that AI is like what - 30% of the US economy? Magnificent 7 are 60%?

What happens if sustainable PE ratios in tech collapse? Does it take out Tesla?

Maybe the contagion is just the impact on the US economy which, classically anyways has been intermingled with everything.

I would bet almost everything that there is some lie at the center of this thing that we aren’t really aware of yet.

SJC_Hacker

3 hours ago

> Combine this with the fact that AI is like what - 30% of the US economy? Magnificent 7 are 60%?

Nowhere close. US GDP is like $30 trillion. OpenAI revenue is ~$4 billion. All the other AI companies' revenue might amount to $10 billion at most, and that is being generous. $10 billion / $30 trillion is not even 1%.

You are forgetting all those "boring" sectors that form the basis of economies, like agriculture and energy. They have always been bigger than the tech sector at any point, but they aren't "sexy" because they don't have the potential for "exponential growth" that tech companies promise.

the_duke

4 hours ago

It may well be that an AI bubble burst is the tipping point, but I think that tipping point was coming either way.

The US admin has been (almost desperately) trying to prop up markets and an already struggling economy. If it wasn't AI, it could have been another industry.

I think AI is more of a sideshow in this context. The bigger story is the dollar losing its dominant position, money draining out into gold, silver, and other stock markets, India buying oil from Russia in yuan, a global economy that has for years been propped up by government spending (US/China/Europe/...), large and lasting geopolitical power balance shifts, ...

These things don't happen overnight, and in fact over many years for USD holdings, but the effects will materialize.

Some of the above (dollar devaluation) is actually what the current admin wanted, which I would see as an admission of global shifts. We might see much larger changes to the whole financial system in the coming decades, which will have a lot of effects.

MattGaiser

4 hours ago

> People keep spouting this, but I don't see how the AI bubble bursting would be all that devastating.

Well, an enormous amount of debt is being raised and issued for AI, and US economic growth is nearly entirely AI-driven. Crypto bros showed the other day that they were leveraged to the hilt on coins, and it wouldn't surprise me if people are the same way on AI. It is pretty heavily tied to the financial system at this point.

Theodores

4 hours ago

When America sneezes, the rest of the world catches a cold. This was said after the OG 1929 crash and I can remember it said in the 80s. Nobody says it anymore.

Due to exorbitant privilege, with the dollar as the only currency that matters, every country that trades with America is swapping goods and services for 'bits of green paper'. Unless they are buying oil from Russia, these bits of green paper are what they need to buy oil. National currencies and the Euro might as well be casino chips, mere proxies for dollars.

Just last week the IMF issued a warning regarding AI stocks and the risk they pose to the global economy if promises are not delivered.

With every hiccup, whether that be the dot com boom, 2008 or the pandemic, the way out is to print more money, with this money going in at the top, for the banks, not the masses. This amounts to devaluation.

When the Ukraine crisis started, the Russian President stopped politely going along with Western capitalism and called the West out for printing too much money during the pandemic. Cut off from SWIFT and with many sanctions, Russia started trading in other currencies with BRICS partners. We are now at a stage of the game where the BRICS countries, of which there are many, already have a backup plan for when the next US financial catastrophe happens. They just won't use the dollar anymore. Note that currently, China doesn't want any dollars making it back to its own economy, since that would cause inflation. So they invest their dollars in Belt and Road initiatives, keeping those green bits of paper safely away from China. They don't even need exports to the USA or Europe since they have a vast home market to develop.

Note that Russia's reserve of dollars and euros was confiscated. They have nothing to lose so they aren't going to come back into the Western financial system.

Hence, you are right. A market crash won't be a global systemic crisis; it just means that Shanghai becomes the financial capital of the world, with no money printing unless it is backed by mineral, energy, or other resources that have tangible value. This won't be great for the collective West, but pretty good for the rest of the world.

the_duke

4 hours ago

I have similar views on many points, see my response to a sibling comment.

I just think that effects of the AI bubble bursting would be at most a symptom or trigger of much larger geopolitical and financial shifts that would happen anyway.

bbor

5 hours ago

This is just tit-for-tat clickbait. The researcher’s wording was a bit unclear for sure, but far from incorrect.

resoluteteeth

5 hours ago

I disagree. There is no way to interpret "GPT-5 just found solutions to 10 (!) previously unsolved Erdos problems" as saying something other than GPT-5 having solved them.

If it just found existing solutions then they obviously weren't "previously unsolved" so the tweet is wrong.

He clearly misunderstood the situation and jumped to the conclusion that GPT-5 had actually solved the problems because that's what he wanted to believe.

That said, the misunderstanding is understandable because the tweet he was responding to said they had been listed as "open", but solving unsolved Erdős problems by itself would be such a big deal that he probably should have double-checked it.

strangescript

4 hours ago

This entire thing has been pretty disingenuous on both sides of the fence. All the anti-AI (or anti-OpenAI) people are doing victory laps, but what GPT-5 Pro did is still very valuable.

1) What good is your open problem set if it's really just a trivial "google search" away from being solved? Why are they not catching any blame here?

2) These answers still weren't perfectly laid out for the most part. GPT-5 was still doing some cognitive lifting to piece it together.

If a human had done this by hand it would have made news, and instead the narrative would have been inverted to ask serious questions about the validity of problem sets like these and/or how many other solutions are out there that just need to be pieced together from pre-existing research.

But, you know, AI Bad.

lukev

4 hours ago

Framing this question as "AI good" OR "AI bad" is culture-war thinking.

The real problem here is that there's clearly a strong incentive for the big labs to deceive the public (and/or themselves) about the actual scientific and technical capabilities of LLMs. As Karpathy pointed out on the recent Dwarkesh podcast, LLMs are quite terrible at novel problems, but this has become sort of an "Emperor's new clothes" situation where nobody with a financial stake will actually admit that, even though it's common knowledge if you actually work with these things.

And this directly leads to the misallocation of billions of dollars and potentially trillions in economic damage as companies align their 5-year strategies towards capabilities that are (right now) still science fiction.

The truth is at stake.

strangescript

2 hours ago

Except they weren't intentionally trying to deceive anyone. They made the faulty assumption that these problems were non-trivial to solve and didn't consider that GPT-5 might simply be aggregating solutions already in the wild.

lukev

2 hours ago

Knowing what I know about LLMs, from their internal architecture and from extensive experience working with them daily, I would find this kind of result highly surprising and a clear violation of my mental model of how these things work. And I'm very far from an expert.

If a purported expert in the field is willing to credulously publish this kind of result, it's not unreasonable to assume that either they're acting in bad faith, or (at best) are high on their own supply regarding what these things can actually do.

Topfi

4 hours ago

> What good is your open problem set if it's really just a trivial "google search" away from being solved? Why are they not catching any blame here?

They are a community-run database, not the sole arbiter and source of this information. We learned basic literature research back in high school; I'd hope researchers from top institutions now working for one of the biggest frontier labs can do the same prior to making a claim, but microblogging has been and continues to be a blight on accurate information, so nothing new there.

> GPT-5 was still doing some cognitive lifting to piece it together.

Cognitive lifting? It's a model, not a person, but besides that fact, this was already published literature. Handy that an LLM can be a slightly better search, but calling claims of "solving maths problems" out as irresponsible and inaccurate is the only right choice in this case.

> If a human had done this by hand it would have made news [...]

"Researcher does basic literature review" isn't news in this or any other scenario. If we did a press release every journal club, there wouldn't be enough time to print a single page advert.

> [...] how many other solutions are out there that just need to be pieced together from pre-existing research [...]

I am not certain you actually looked into the model output or why this was such an embarrassment.

> But, you know, AI Bad.

AI hype very bad. AI anthropomorphism even worse.

puttycat

4 hours ago

This is a strawman argument. No anti-AI sentiment was involved here. Simply the fact that finding and matching text on the Internet is several orders of magnitude easier than finding novel solutions to hard math problems.

strangescript

2 hours ago

You didn't read the X replies if you believe that

nurettin

4 hours ago

AI great, but AI not creative, yet.

matsemann

4 hours ago

You're moving the goal post.

andrepd

4 hours ago

> 1) What good is your open problem set if it's really just a trivial "google search" away from being solved? Why are they not catching any blame here?

Please explain how this is in any way related to the matter at hand. What is the relation between the incompleteness of a math problem database and AI hypesters lying about the capabilities of GPT5? I fail to see the relevance.

> If a human had done this by hand it would have made news

If someone updated information on an obscure math problem aggregator database, this would be news?? Again, I fail to see your point here.

Palmik

4 hours ago

The original tweet was clearly misunderstood...

https://x.com/SebastienBubeck/status/1977181716457701775:

> gpt5-pro is superhuman at literature search:

> it just solved Erdos Problem #339 (listed as open in the official database https://erdosproblems.com/forum/thread/339) by realizing that it had actually been solved 20 years ago

https://x.com/MarkSellke/status/1979226538059931886:

> Update: Mehtaab and I pushed further on this. Using thousands of GPT5 queries, we found solutions to 10 Erdős problems that were listed as open: 223, 339, 494, 515, 621, 822, 883 (part 2/2), 903, 1043, 1079.

It's clearly talking about finding existing solutions to "open" problems.

The main mistake is by Kevin Weil, OpenAI CTO, who misunderstood the tweet:

https://x.com/kevinweil/status/1979270343941591525:

> you are totally right—I actually misunderstood @MarkSellke's original post, embarrassingly enough. Still very cool, but not the right words. Will delete this since I can't edit it any longer I think.

Obviously embarrassing, but a completely overblown reaction. Just another way for people to dunk on OpenAI :)

Topfi

4 hours ago

If holding the CTO of OpenAI accountable for his wildly inaccurate statement constitutes "dunking on OpenAI", then I'd say dunk away.

He, more than anyone else, should be able to, for one, parse the original statements correctly and, for another, realize that if one of their models had actually accomplished what he seemed to think GPT-5 had, that would warrant more scrutiny and research before posting. After all, it would have been a clear and incredibly massive development for the space, something the CTO of OpenAI should recognize instantly.

The number of people who told me this is clear and indisputable proof that AGI/ASI/whatever is either around the corner or already here is far more than zero, and arguing against their misunderstanding was made all the more challenging because "the CTO of OpenAI knows more than you" is quite a solid appeal to authority.

I'd recommend maybe a waiting period of 48h before any authority in any field can send a tweet; that might resolve some of the inaccuracies and the incredibly annoying need to jump on wild bandwagons...

zozbot234

4 hours ago

"you are totally right—I actually misunderstood" ...like, seriously? Did an AI come up with this retraction, or are humans actually talking like robots now?

Topfi

3 hours ago

Guess even the CTO of OpenAI relies on Anthropic models in a pinch...