AI was not invented, it arrived

22 points | posted 18 hours ago
by fcpguru

50 Comments

nospice

16 hours ago

I don't think this framing is useful. First, it applies to every scientific advance ever. Shoulders of giants and all that. We still choose to celebrate discovery because without it, fewer people would pursue scientific research.

And second, this article is almost certainly AI-written, so the joke is on us for engaging with it.

visarga

16 hours ago

I think it is AI-worded, but the ideas come from a real human.

yannyu

16 hours ago

Well, then we should judge the ideas on their own merits. And it's also not a great idea.

It's a shallow, post-hoc, mystic rationalization that ignores all the work in multiple fields that actually converged to get us to this point.

danaris

15 hours ago

...yes?

What AI out there now is coming up with ideas for articles?

realitydrift

17 hours ago

This framing clicks for me, especially the idea that we crossed a threshold by building conditions rather than intentions. One way to see what emerged is not as intelligence per se, but as a new channel for compressing human meaning.

At scale, any compression system faces a tradeoff between entropy and fidelity. As these models absorb more language and feedback, meaning doesn’t just get reproduced, it slowly drifts. Concepts remain locally coherent while losing alignment with their original reference points. That’s why hallucination feels like the wrong diagnosis. The deeper issue is long run semantic stability, not one off mistakes.

The arrival moment wasn’t when the system got smarter, but when it became a dominant mediator of meaning and entropy started accumulating faster than humans could notice.

qlm

16 hours ago

No I'm fairly certain it was invented and that this style of breathless science fiction roleplay will be looked back on as an embarrassing relic of the era.

echelon

16 hours ago

I didn't even read the article, and I know the headline is 100% correct.

It's the result of stochastic hill climbing of a vast reservoir of talented people, industry, and science. Each pushing the frontiers year by year, building the infra, building the connective tissue.

We built the collection of requirements that enabled it through human curiosity, random capitalistic process, boredom, etc. It was gaming GPUs for goodness sake that enabled the scale up of the algorithms. You can't get more serendipitous than that. (Perhaps some of the post-WWII/cold war tech even better qualifies for random hill climbing luck. Microwave ovens, MRI machines, etc. etc.)

Machine learning is inevitable in a civilization that has evolved intelligence, industrialization, and computation.

We've passed all the hard steps to this point. Let's see what's next. Hopefully not the great filter.

hnhg

16 hours ago

How is that different from "Compact Discs weren't invented, they arrived"?

echelon

16 hours ago

Point to the single inventor of AI. You're going to have trouble.

Maybe you give it to the authors of a few papers, but even then you'll struggle to capture even a fraction of the necessary preconditions.

The successes also rely on observing the failures and the alternative approaches. Do we throw out their credit as well?

The list would be longer than the human genome paper.

qlm

16 hours ago

Yes and exactly the same thing could be said for the invention of compact discs. You're just describing "history".

throw310822

16 hours ago

CDs are designed to be exactly the way they are, and you don't get out of them anything more, or different, than what you put in.

Compute and transformers are a substratum, but the stuff that developed on it through training isn't made according to our design.

tim333

11 hours ago

I don't have a problem with the headline but the article is kind of bad.

And the headline is vague enough that you could read many meanings into it.

My take would be to go back to Turing: he could see that AI was likely in the future, and that the output of a Turing-complete system is essentially a mathematical function; we just needed the algorithms and hardware to crank through it, which he thought we might have in 50 years, though it's taken nearer 75.

The "intelligence did not get installed. It condensed" stuff reads like LLM slop.

tomxor

16 hours ago

> The idea is unsettling because it reframes human agency

Not really, it's called discovery, aka science.

This weird framing is just perpetuating the idea of LLMs being some kind of magic pixie dust. Stop it.

cubefox

16 hours ago

Like magic pixie dust, nobody knows in detail how AI models work. They are not explicitly created like GOFAI or arbitrary software. The machine learning algorithms are explicitly written by humans, but the model in turn is "written" by a machine learning algorithm, in the form of billions of neural network weights.

kreetx

16 hours ago

I think we do know how they work, no? We give a model some input, it travels through the big neural net of probabilities (obtained through training), and arrives at a result.

Sure, you don't know upfront what the exact constellation of a trained model will be. But similarly, you don't know what, e.g., the average age of some group of people is until you compute it.
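The mechanics being described can be sketched in a few lines: inference is just a deterministic pass of the input through learned weights. This is a toy two-layer network with arbitrary fixed weights, purely for illustration; it is not any real LLM architecture, and the weight values are made up.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax: shift by max before exponentiating.
    e = np.exp(z - z.max())
    return e / e.sum()

# In practice these weights would come from training; here they are arbitrary.
W1 = np.array([[0.5, -0.2],
               [0.1,  0.8]])
W2 = np.array([[1.0,  0.0],
               [0.0,  1.0]])

def forward(x):
    h = np.tanh(W1 @ x)      # hidden layer activation
    return softmax(W2 @ h)   # output "probabilities"

probs = forward(np.array([1.0, 0.0]))
# probs sums to 1, and the same input always yields the same output.
```

The point of the analogy: the forward pass itself is fully mechanical and repeatable; what we can't easily read off is why training arrived at these particular weights.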

cubefox

16 hours ago

If it solves a problem, we generally don't know how it did it. We can't just look at its billions of weights and read what they did. They are incomprehensible to us. This is very different from GOFAI, which is just a piece of software whose code can be read and understood.

visarga

16 hours ago

May I point out that we don't know in detail how most code runs? Not talking about assembly, I am talking about edge cases, instabilities, etc. We know the happy path and a bit around it. All complex systems based on code are unpredictable from static code alone.

cubefox

16 hours ago

We know at least fairly well how code runs if we look at it. But we know almost nothing about how a specific AI model works. Looking at the weights is pointless. It's like looking into Beethoven's brain to figure out how he came up with the Moonlight Sonata.


littlestymaar

16 hours ago

This applies to pretty much every technology:

When we built nuclear power plants we had no idea what really mattered for safety or maintenance, or even what day-to-day operations would be like, and we discovered a lot as we ran them (which is why we have been able to keep extending their lifetimes far beyond what they were planned for).

Same for airplanes: there's a ton of empirical knowledge about them, and people are still trying to build better models of why the things that work do work the way they do (a former roommate of mine did a PhD on modeling combustion in jet engines, and she told me how many of the details were still unknown, despite the technology having been in wide use for the past 70 years).

By the way, this is the fundamental reason why waterfall often fails: we generally don't understand something well enough until we have built it and used it extensively.

cubefox

15 hours ago

GOFAI software ≈ airplane

ML model ≈ bird

tptacek

16 hours ago

If we'd had blogs back in the 1980s, someone would have written a post that sounded just like this, but about databases. People really did talk this way about "databases". There were people who were afraid of them.

raincole

16 hours ago

People called SQL a "fourth-generation language."

Hell, people called Lisp an "AI programming language."

The lesson here might be that people say unhinged things about whatever new technology they're hyping.

happytoexplain

16 hours ago

This trope is being worn to the point of absurdity. Yes, people don't like things. All throughout history. Sometimes reasonably, sometimes unreasonably.

X is not Y. It's X.

tptacek

16 hours ago

It's not about "like" or "dislike". It's that people are unsettled by new technology that they can't immediately get their heads around. But today, it sounds kind of silly to be unsettled by the concept of a database.

phplovesong

16 hours ago

It descended like a shitstorm, and now we are all covered in it.

myhf

16 hours ago

Small correction: AI was not invented and it did not arrive.

beders

16 hours ago

AI, as a discipline, has been around forever (since 1956), essentially since the birth of Lisp, with staggering successes as well as spectacular failures that ushered in two (three?) so-called AI winters.

The author probably just means LLMs. And that's really all you need to know about the quality of this article.

rdiddly

15 hours ago

Are LLMs intelligent? The question is far from settled, despite widespread discussion to the point of tedium. But this post freely equates the two without any reflection or qualification, not even a footnote. Omitting it avoids the tedium, but also places the post in the realm of the fanciful, which incidentally has a partial Venn-diagram overlap with the realm of marketing. Maybe that wasn't the author's intent, but that's what walked through the open door when this post arrived.

empiko

16 hours ago

I would say it was discovered, not invented. People were messing around with some algorithms, intrigued by their results. Eventually researchers discovered that using a certain training algorithm with certain data can lead to really wonderful outputs. But this is pure empirical discovery.

No AI researcher from 2010 would have predicted that the transformer architecture (if we could send them the description back in time), SGD, and Web crawling could lead to very coherent and useful LMs.

kreetx

16 hours ago

Yup. LLMs are a big statistical model, where no sub-part knows the whole. If it's really similar to a brain, I guess we might say we discovered it. But if it isn't, we invented it. The fact that it is so useful doesn't have to mean that "it arrived".

tomrod

16 hours ago

I disagree strongly. AI came from smart engineering and design applied to algorithms developed for intellectual curiosity. It was absolutely invented.

amelius

16 hours ago

Well, intelligence evolved over millions of years without design (assuming you are not religious).

This all happened without anyone even looking for a way to create intelligence.

The biggest step in AI was the invention of the artificial neural network. However, it is still a copy of nature's work, and in fact you could argue that even the inventor is nature's work. So there's a big argument in favor of "it arrived".

tomrod

15 hours ago

Not all AI algorithms are neural networks. So from the get-go, you are conflating terms to propose an underspecified and improperly esoteric worldview.

We invented AI. That the structure of a neuron inspired one subsystem architecture framework offers nothing essentialist or sacrosanct to the whole enterprise.

Sticks were our first clubs, but we don't limit our design and engineering for tools or weapons to the nature of trees. We extract good principles and invent the form as well as, often, the function.

qlm

16 hours ago

Everything that has been "invented" was invented by humans and on some level depends on the laws of nature to function.

I recently bought whey protein powder that doesn't come from milk. It was synthesized by human-engineered microbes. Did this invention "arrive"?

kreetx

16 hours ago

"Intelligence" is too vague. Do you mean neural nets in our heads developed in millions of years? Do we know that it is a neural net?

Rikudou

16 hours ago

Nah, I'm pretty sure we invented it. Otherwise I'm not sure what costs all these companies so much money.

Granted, I only managed to read two and half paragraphs before deciding it's not worth my time, but the argument that we didn't teach it irony is bullshit: we did exactly that by feeding it text with irony.

echelon

16 hours ago

Gaming GPUs enabled it. That's random serendipitous connective tissue that was presaged by none of the people who wrote the first papers fifty years ago.

Individual researchers and engineers are pushing forward the field bit by bit, testing and trying, until the right conditions and circumstances emerge to make it obvious. Connections across fields and industries enable it.

Now that the salient has emerged, everyone wants to control it.

Capital battles it out for the chance to monopolize it.

There's a chance that the winner(s) become much bigger than the tech giants of today. Everyone covets owning that.

The battle to become the first multi-trillionaire is why so much money is being spent.

fromMars

16 hours ago

This is a brilliant article and shouldn't be dismissed so quickly.

I think the framing is dead on.

bgwalter

16 hours ago

This gushing article omits the fact that multiple OpenAI researchers were on record saying that they were surprised by the early success of "AI". Of course the development was incremental, slow and unspectacular to insiders.

After everyone has been exposed to the patterns, idioms and mistakes of the parrots only the most determined (or monetarily invested) people are still impressed.

Emergence? Please, just because something has blinkenlights and humming fans does not mean it's intelligent.

throw310822

16 hours ago

Imagine: there are a lot of people who are dismissive even now, when the parrots can write their code or crush them in a philosophical discussion.

bgwalter

16 hours ago

They cannot write my code [1] and a photocopier also "wins" a philosophical discussion if you put Hegel snippets on it.

[1] They steal it though to produce bad imitations.

throw310822

16 hours ago

> a photocopier also "wins" a philosophical discussion if you put Hegel snippets on it.

I don't think so, have you tried?

kreetx

16 hours ago

People disagreeing with the article aren't "dismissing AI". Did you read what it said?

throw310822

16 hours ago

Hey Claude, can you help me categorise the tone/sentiment of this statement, in three words?

"After everyone has been exposed to the patterns, idioms and mistakes of the parrots only the most determined (or monetarily invested) people are still impressed."

Claude: Cynical, dismissive, condescending.

kreetx

2 hours ago

The original post and the rest of the comment are about invent vs arrive (discover?). I'm sure I'll be able to find (parts of) your comments, too, that diverge in sentiment.

rpdillon

13 hours ago

bgwalter is clearly dismissing AI. The post has all the telltale signs.

* Rather than the curious "What is it good at? What could I use it for?", we instead get "It's not better than me!" That lacks insight and intentionally sidesteps the point that it has utility for a lot of people who need coding work done.

* Using a bad analogy, protected by scare quotes, to make the invalid suggestion that a human could argue with a photocopier or a philosophical treatise. Clearly humans can only argue with an LLM, due to the interactive nature of the dialogue.

* The use of the word "steal" to describe the use of material in training AI models, again intentionally conflating theft with copyright infringement. But even that suggestion is not accurate: model training is currently considered fair use, and court findings were already trending in this direction. So even the suggestion that it's copyright infringement doesn't hold water. Piracy of material would invalidate that, but that's not what happened in the case of bgwalter's code, I expect. I expect bgwalter published their code online and it was scraped.

Agree with the sibling comment, posting Claude's assessment that mirrors this analysis. Dismissive and cynical is a good way to put it.