dns_snek
15 hours ago
If actions by these bad actors accelerate the rate at which people lose trust in these systems and lead to the AI bubble popping faster then they have my full support. The entire space is just bad actors complaining about other bad actors while they're collectively ruining the web for everyone, each in their own way.
imiric
14 hours ago
Before the bubble does pop, which I think is inevitable, there will be many stories like this one, and a lot of people will be scammed, manipulated, and harmed. It might take years until the general consensus is negative about the effects of these tools. All while the wealthy and powerful continue to reap the benefits, while those on slightly lower rungs fight to take their place. And even if the public perception shifts, the power might be so concentrated that it could be impossible to dislodge it without violent means.
What a glorious future we've built.
autoexec
13 hours ago
> It might take years until the general consensus is negative about the effects of these tools.
The only thing I'm seeing offline are people who already think AI is trash, untrustworthy, and harmful, while also occasionally being convenient when the stakes are extremely low (random search results mostly) or as a fun toy ("Look I'm a ghibli character!")
I don't think it'll take long for the masses to sour on AI. The more aggressively it's pushed on them by companies, or the more it negatively impacts their lives when someone they depend on (and who should know better) uses it and it screws up, the quicker that'll happen.
gdbsjjdn
9 hours ago
I work in Customer Success so I have to screenshare with a decent number of engineers working for customers - startups and BigCos.
The number of them who just blindly put shit into an AI prompt is incredible. I don't know if they were better engineers before LLMs? But I just watch them blindly pass flags that don't exist to CLIs and then throw their hands up. I can't imagine it's faster than a (non-LLM) Google search or using the -h flag, but they just turn their brains off.
An underrated concern (IMO) is the impact of COVID on cognition. I think a lot of people who got sick have gotten more tired and find this kind of work more challenging than they used to. Maybe they have a harder time "getting in the zone".
Personally, I still struggle with Long COVID symptoms. This includes brain fog and difficulty focusing. Before the pandemic I would say I was in the top 10% of engineers for my narrow slice of expertise - always getting exceptional perf reviews, never had trouble moving roles and picking up new technologies. Nowadays I find it much harder to get started in the morning, and I have to take more breaks during the day to reset my focus. At 5PM I'm exhausted and I can't keep pushing solving a problem into the evening.
I can see how the same kind of cognitive fatigue would make LLM "assistance" appealing, even if it's wrong, because it's so much less work.
bluefirebrand
6 hours ago
Reading this, I'm wondering if I'm suffering "Long Covid"
I've recently had tons of memory and brain fog. I thought it was related to stress, and it's severe enough that I'm on medical leave from work right now
My memory is absolutely terrible
Do you know if it is possible to test or verify if it's COVID related?
Lu2025
8 hours ago
> An underrated concern (IMO) is the impact of COVID on cognition
Car accidents came down from the Covid uptick, but only slightly. Aviation... ugh. And there is some evidence it accelerates Alzheimer's and other dementias. We are so screwed.
tokioyoyo
10 hours ago
Counter data point — my surroundings use ChatGPT basically for anything and say it’s good enough.
glotzerhotze
9 hours ago
Same here, people use it like Google for searching answers. It's a shortcut so they don't have to screen results and reason about them.
amalcon
8 hours ago
This is precisely the problem: users still need to screen and reason about results of LLMs. I am not sure what is generating this implied permission structure, but it does seem to exist.
(I don't mean to imply that parent doesn't know this, it just seems worth saying explicitly)
tokioyoyo
5 hours ago
It’s only a problem for people who care about its precision. If it’s right about 80-90% of stuff, it’s good enough.
Lu2025
8 hours ago
> say it’s good enough
How do they know?
tokioyoyo
5 hours ago
Doesn’t matter. If they feel it's “good enough”, that’s already “good enough”. The supermajority of the world doesn’t revolve around truth-seeking, fact-checking, or curiosity.
intended
9 hours ago
The things I have noted offline include a HK case where someone got a link to a Zoom call with what seemed to be his teammates and CFO, and then transferred money per the CFO's instructions.
The error there was clicking on a phishing email.
But something I have seen myself is Tim Cook talking about a crypto coin right after the 2024 Apple keynote, on a YT channel that showed the Apple logo. It took me a bit to realize and reassure myself that it was a scam. Even though it was a video of the shoulders up.
The bigger issue we face isn’t the outright fraud and scamming, it’s that our ability to make out fakes easily is weakened - the Liar’s dividend.
It’s by default a shot in the arm for bullshit and lies.
On some days I wonder if the inability to sort between lies, misinformation, initial ideas, fair debate, argument, theory and fact at scale - is the great filter.
_Algernon_
12 hours ago
We got the boring version of the cyberpunk future. No cool body mods, neon city scapes and space travel. Just megacorps manipulating the masses to their benefit.
filoeleven
8 hours ago
The cool body mods are coming!
The work at the Levin Lab ( https://drmichaellevin.org/ ) is making great progress in the basic science that supports this. They can make two-headed planaria, regenerate frog limbs, cure cancer in tadpoles; all via bioelectric communication with cellular networks. No gene editing.
Levin believes this stuff will be very available to humans within the next 10 years, and has talked about how widespread body-modding is something we're going to have to wrestle with societally. He is of course very close to the work, but his cautious nature and the lab's astounding results give that 10-year prediction some weight. From his blog:
> We were all born into physical and mental limitations that were set at arbitrary levels by chance and genetics. Even those who have “perfect” standard human health and capabilities are limited by anatomical decisions that were not made with anyone’s well-being or fulfillment in mind. I consider it to be a core right of sentient beings to (if they wish) move beyond the involuntary vagaries of their birth and alter their form and function in whatever way suits their personal goals and potential.- Copied from https://thoughtforms.life/faqs-from-my-academic-work/
Terr_
3 hours ago
> cellular networks
I often like to point out--satisfying a contrarian streak--that our original human equipment is literally the most mind-bogglingly complicated nanotechnology beyond our understanding, packed with dozens of incredible features we cannot imitate with circuits or chrome.
So as much as I like the aesthetics of cyberpunk metal arms, keeping our OEM parts is better. If we need metal bodies at a construction site, let them be remote-controlled bodies that stay there for the next shift to use.
tclancy
12 hours ago
In retrospect, it should have been obvious. I guess I should have known it would all be more Repo Man than Blade Runner. I just didn’t imagine so many people cheering for the non-Wolverines side in Red Dawn.
(Now I want to change the Blade Runner reference to something with Harry Dean Stanton in it just for consistency)
Lu2025
8 hours ago
Oh well, at least the futuristic sunglasses are back in fashion.
LilBytes
13 hours ago
The tragic part of fraud is it's not too different to operational health and safety.
The rules and standards we take for granted were built with blood. For fraud? They were built on a path of lost livelihoods and manipulated good intent.
pyman
13 hours ago
How do you know this is fraud and not the actions of former employees in Kenya [1] who were exploited [2] to train the models?
[1] https://www.cbsnews.com/amp/news/ai-work-kenya-exploitation-...
[2] https://www.theguardian.com/technology/2023/aug/02/ai-chatbo...
thunky
9 hours ago
> Before the bubble does pop, which I think is inevitable
Curious what you think a popping bubble looks like?
A stock market crash and recession, where innocent bystanders lose their retirements? Or only AI speculators taking the brunt of the losses?
Will Google, Meta, etc stop investing in AI because nobody uses it post-crash? Or will it be just as prevalent (or more) than today but with profits concentrated in the winning/surviving companies?
imiric
5 hours ago
We've seen this before in 1983 and 2000. Many companies will fold, and those that don't will take a substantial hit. The public sentiment about "AI" will sour, but after that a new breed of more practical tools will emerge under different and more fairly marketed branding.
I do think that the industry and this technology will survive, and we'll enjoy many good applications of it, but it will take a few more years of hype and grifting to get there.
Unless, of course, I'm entirely wrong and their predicted AI 2027 timeline[1] comes to pass, and we have ASI by the end of the decade, in which case the world will be much different. But I'm firmly in the skeptical camp about this, as it seems like another product of the hype machine.
[1]: I just took a closer look at ai-2027.com and here's their prediction for 2029 in the conservative scenario:
> Robots become commonplace. But also fusion power, quantum computers, and cures for many diseases. Peter Thiel finally gets his flying car. Cities become clean and safe. Even in developing countries, poverty becomes a thing of the past, thanks to UBI and foreign aid.
Yeah, these people are full of shit.
thunky
2 hours ago
> We've seen this before in 1983 and 2000. Many companies will fold, and those that don't will take a substantial hit.
Makes sense, but if the negative effect of the bubble popping is largely limited to AI startups and speculators, while the rest of us keep enjoying the benefits of it, then I don't see why the average person should be too concerned about a bubble.
In 2000, cab drivers were recommending tech stocks. I don't see that kind of thing happening today.
> Yeah, these people are full of shit.
I think it's fair to keep LLMs and AGI separate when we're talking about "AI". LLMs can make a huge impact even if AGI never happens. We're already seeing it now imo.
AI 2027 says:
- Early 2026: Coding Automation
- Late 2026: AI Takes Some Jobs
These things are already happening today without AGI.
k__
12 hours ago
But people were also hating about media piracy, video games, and the internet in general.
The dotcom bubble popped, but the general consensus didn't become negative.
imiric
6 hours ago
Sure. I was referring more to the general consensus about products from companies that are currently riding the AI hype train, not about machine learning in general.
When the dot-com bubble burst in 2000, and after the video game crash in 1983, most of the companies within the bubble folded, and those that didn't took a large hit and barely managed to survive. If the technology has genuine use cases then the market can recover, but it takes a while to earn back the trust from consumers, and the products after the crash are much more practical and are marketed more fairly.
So I do think that machine learning has many potentially revolutionary applications, but we're currently still high around the Peak of Inflated Expectations. After the bubble pops, the Plateau of Productivity will showcase the applications with actual value and benefit to humanity. I just hope we get there sooner rather than later.
morngn
11 hours ago
The bubble won’t pop on anything that’s correlated with scammers. Exhibit A: bitcoin. The problem is not one of public knowledge or will of the people, it’s congress being irresponsible because it’s captured by the 2 parties. You can’t politicize scamming in a way that benefits either party so nothing happens. And the scammers themselves may be big donors (eg SBF’s ties to the dem party, certain ai players purchase of Trump’s favor with respect to their business interests, etc). Scammers all the way down.
imiric
5 hours ago
Good point. I suppose that if grifters can get in positions of power, then the bubble can just keep growing.
Though cryptocurrencies are slightly different because of how they work. They're inherently decentralized, so even though there have been many smaller bubble pops along the way (Mt. Gox, FTX, NFTs, every shitcoin rug pull, etc.), inevitably more will appear with different promises, attracting others interested in potential riches.
I don't think the technology as a whole will ever burst, particularly because I do think there are valid and useful applications of it. Bitcoin in particular is here to stay. It will just keep attracting grifters and victims, just like any other mainstream technology.
QuantumGood
3 hours ago
The "accelerate the end times" argument was probably made most famously by Charles Manson. The "side" effects from supporting bad actions are not good. Presumably you are being 51% or more facetious, but probably more nuance is preferable.
azan_
13 hours ago
> at which people lose trust in these systems
Most people do not lose trust in a system as long as it confirms their biases (biases which the system may have created in the first place).
0x0203
12 hours ago
It's mostly bad actors, and a smattering of optimists who believe that despite its current problems, AI will eventually and inevitably get better. I also wish the whole thing would calm down and come back to reality, but I don't think it's a bubble that will pop. It will continue to get artificially puffed up for a while because too many businesses and people have invested too much to just quit (sunk cost fallacy), and there's a big enough market in a certain class of writer/developer/etc. for whom the short-term benefits will justify the continued existence of AI products for a while. My prediction is that as the long-term benefits for honest users peter out, the bubble won't pop, but deflate into a wrinkled 10 day old helium balloon. There will still be a big enough market, driven by cons, ad tech, people trying to suck up as many ad dollars as possible, and other bad actors, that the tech will persist and continue to infest the web/world for quite a while.
AI is the new crypto. Lots of promise and big ideas, lots of people with blind faith about what it will one day become, a lot of people gaming the system for quick gains at the expense of others. But it never actually becomes what it pretends/promises to be and is filled with people continuing the grift trying to make a buck off the next guy. AI just has better marketing and more corporate buy in than crypto. But neither are going anywhere.
contrast
12 hours ago
“the bubble won't pop, but deflate into a wrinkled 10 day old helium balloon”
Love it :)
tempodox
11 hours ago
> AI is the new crypto.
But it's also way worse than cryptocurrencies, because all the big actors are pushing it relentlessly, with every marketing trick they know. They have to, because they invested insane amounts of money into snake oil and now they have to sell it in order to recover at least a fraction of their investments. And the amounts of energy wasted on this ultimately pointless performance are beyond staggering.
wiseowise
11 hours ago
In what parallel universe do you live where LLMs are snake oil?
const_cast
36 minutes ago
It depends on how they're marketed and what the prevailing opinion is. If we believe LLMs are true, genuine intelligence, then yes, I'd say that's snake oil.
4ndrewl
10 hours ago
Not LLMs per se, but the wrap-around claims peddled by "AI" companies.
morngn
10 hours ago
I think he was being metaphorical.
throwawayqqq11
10 hours ago
From a classist's perspective, big capital can't drop the AI ball, because it's their only shot at becoming independent from human labor: those pesky humans their wealth unfortunately depends upon, and who could democratically seize it in an instant.
I bet there are billionaire geniuses out there seeing a future island life far away from the contaminated continents, sustained by robots. So no matter how much harder AI progress gets, money will keep flowing.
hnlmorg
13 hours ago
If that outcome were likely, then Fox News and The Daily Mail would have died a death a decade ago and Trump wouldn’t be serving a 2nd term.
Yet here we are, in a world where it doesn’t matter if “facts” are truth or lies, just as long as your target audience agrees with the sentiment.
dilawar
13 hours ago
Tobacco, alcohol, and drugs too!
7bit
14 hours ago
That's naive. Look at all the tabloids thriving. The kind of people that bad actors target will continue to believe everything it says. They won't lose trust, or magazines like the New York Post, the Sun, or BILD would already have ceased to exist with their lies and deception. And Russia would not have so many cult members believing the lies they spread.
moomin
9 hours ago
The thing is: who benefits from a loss of trust in systems? The answer, inevitably, is those for whom the system was a problem. The fewer places people can trust for accurate information, the more disinformation wins.
cyanydeez
7 hours ago
If you use the USA Republicans as a benchmark and Fox News as the bad actors, there's perpetual faith that facts won't matter. Just keep confirming biases and foreshadowing upcoming pivots to choose-your-own-delusions.
3cats-in-a-coat
13 hours ago
AIs can be trained to rely more on critical thinking rather than just regurgitating what they read. The problem is, just like with people, critical thinking takes more power and time. So we avoid it as much as possible.
In fact, optimizing for the wrong things like that, is basically the entire world's problem right now.
bregma
12 hours ago
Regurgitating its input is the only thing it does. It does not do any thinking, let alone critical thinking. It may give the illusion of thinking because it's been trained on thoughts. That's it.
impossiblefork
9 hours ago
Yes, but the regurgitation can be thought of as memory.
Let it have more source information. Let it know who said the things it reads, let it know on what website it was published.
Then you can say 'Hallucinate comments like those by impossibleFork on news.ycombinator.com', and when the model knows what comes from where, maybe it can learn which users are reliable and imitate them to answer questions well. Strengthen the role of metadata during pretraining.
I have no reason to believe it'll work, I haven't tried it, and details are usually incredibly important when doing things with machine learning. But maybe you could even have critical phases during pretraining where you try to prune away behaviours that aren't useful for figuring out the answers to the questions in your highly curated golden datasets. Then models could throw away a lot of lies and bullshit, except that which happens to be on particularly LLM-pedagogical maths websites.
3cats-in-a-coat
12 hours ago
[flagged]
anal_reactor
13 hours ago
This whole attitude against AI reminds me of my parents being upset that the internet changed the way they live. They refused to take part in the internet revolution, and now they're surprised that they don't know how to navigate the web. I think that a part of them is still waiting for computers in general to magically disappear, and everything return to the times of their youth.
agentcoops
12 hours ago
Indeed — however it’s interesting that unlike the internet, computers, or smartphones, the older generation, like the younger, immediately found a use for GPT. This is reflected in the latest Mary Meeker report, where it’s apparent that the /organic/ growth of AI use is unparalleled in the history of technology [1]. In my experience with my own parents’ use, GPT is the first time the older generation has found an intuitive interface to digital computers.
I’m still stunned to wander into threads like this where all the same talking points of AI being “pushed” on people are parroted. Marcus et al can keep screaming into their echo chamber and it won’t change a thing.
[1] https://www.bondcap.com/report/pdf/Trends_Artificial_Intelli...
wiseowise
11 hours ago
> I’m still stunned to wander into threads like this where all the same talking points of AI being “pushed” on people are parroted.
Where else would AI haters find an echo chamber that proves their point?
agentcoops
11 hours ago
It's wild -- I've never seen such a persistent split in the Hacker News audience like this one. The skeptics read one set of AI articles, everyone else the others; a similar comment will be praised in one thread and down-voted to oblivion in another.
throwawayqqq11
10 hours ago
IMO the split is between people who understand the heuristic nature of AI and people who don't, and thus think of it as an all-knowing, all-solving oracle. Your elderly parents having nice conversations with ChatGPT is fine as long as it doesn't make big life-changing decisions for them, which already happens today.
You have to know the tool's limits and use cases.
agentcoops
9 hours ago
I can’t see that proposed division as anything but a straw man. You would be hard-pressed to find anyone who genuinely thinks of LLMs as an “all-knowing, all-solving oracle” and yet, even in specialist fields, their utility is certainly more than a mere “heuristic”, which of course isn’t to say they don’t have limits. See only Terence Tao’s reports on his ongoing experiments.
Do you genuinely think it’s worse that someone makes a decision, whether good or bad, after consulting with GPT versus making it in solitude? I spoke with a handyman the other day who unprompted told me he was building a side-business and found GPT a great aid — of course they might make some terrible decisions together, but it’s unimaginable to me that increasing agency isn’t a good thing. The interesting question at this stage isn’t just about “elder parents having nice conversations”, but about computers actually becoming useful for the general population through an intuitive natural language interface. I think that’s a pretty sober assessment of where we’re at today not hyperbole. Even as an experienced engineer and researcher myself, LLMs continue to transform how I interact with computers.
johneth
2 hours ago
> Do you genuinely think it’s worse that someone makes a decision, whether good or bad, after consulting with GPT versus making it in solitude?
Depending on the decision yes. An LLM might confidently hallucinate incorrect information and misinform, which is worse than simply not knowing.
daveguy
9 hours ago
Yup. Exactly this. As soon as enough people get screwed by the ~80% accuracy rate, the whole facade will crumble. Unless AI companies manage to bring the accuracy up 20% in the next year, by either limiting scope or finding new methods, it will crumble. That kind of accuracy gain isn't happening with LLMs alone (ie foundational models).
agentcoops
9 hours ago
Charitably, I don’t understand what those like you mean by the “whole facade” and why you use these old machine learning metrics like “accuracy rate” to assess what’s going on. Facade implies that the unprecedented and still exponential organic uptake of GPT (again see the actual data I linked earlier from Mary Meeker) is just a hype-generated fad, rather than people finding it actually useful to whatever end. Indeed, the main issue with the “facade” argument is that it’s actually what dominates the media (Marcus et al) much more than any hyperbolic pro-AI “hype.”
This “80-20” framing, moreover, implies we’re just trying to asymptotically optimize a classification model or some information retrieval system… If you’ve worked with LLMs daily on hard problems (non-trivial programming and scholarly research, for example), the progress over even just the last year is phenomenal — and even with the presently existing models I find most problems arise from failures of context management and the integration of LLMs with IR systems.
daveguy
6 hours ago
Time will tell.
12345hn6789
2 hours ago
My team has measurably gotten our LLM feature to ~94% accuracy in widespread, reliable tests. Seems fairly confident, speaking as an SWE, not a DS or ML engineer, though.
sevensor
6 hours ago
I think of the two camps like this: one group sees a lot of value in llms. They post about how they use them, what their techniques and workflows look like, the vast array of new technologies springing up around them. And then there’s the other camp. Reading the article, scratching their heads, and not understanding what this could realistically do to benefit them. It’s unprecedented in intensity perhaps, but it’s not unlike the Rails and Haskell camps we had here about a dozen years ago.
anal_reactor
9 hours ago
I think there are two problems:
1. AI is a genuine threat to lots of white-collar jobs, and people instinctively deny this reality. See that very few articles here are "I found a nice use case for AI", most of them are "I found a use case where AI doesn't work (yet)". Does it sound like tech enthusiasts? Or rather people terrified of tech?
2. Current AI is advanced enough to have us ask deeper questions about consciousness and intelligence. Some answers might be very uncomfortable and threaten the social contract, hence the denial.
agentcoops
8 hours ago
On the second point, it’s worth noting how many of the most vocal and well-positioned critics of LLMs (Marcus/Pinker in particular) represent the still academically dominant but now known to be losing side of the debate over connectionism. The anthology from the 90s Talking Nets is phenomenal to see how institutionally marginalized figures like Hinton were until very recently.
Off-topic, but I couldn’t find your contact info and just saw your now closed polyglot submission from last year. Look into technical sales/solution architecture roles at high growth US startups expanding into the EU. Often these companies hire one or two non-technical native speakers per EU country/region, but only have a handful of SAs from a hub office so language skills are of much more use. Given your interest in the topic, check out OpenAI and Anthropic in particular.
anal_reactor
6 hours ago
Thanks for the advice. Currently I have a €100k job where I sit and do nothing. I'm wondering if I should coast while it lasts, or find something more meaningful
grishka
13 hours ago
The internet actually enabled us to do new things. AI is nothing of that sort. It just generates mediocre statistically-plausible text.
suspended_state
13 hours ago
In the early days of the web, there wasn't much we could do with it other than making silly pages with blinking texts or under construction animated GIFs. You need to give it some time before judging a new technology.
Disposal8433
12 hours ago
We don't remember the same internet. For the first time in our lives we could communicate by email with people from all over the world. Anyone could have a page to show what they were doing with pictures and text. We had access to photos and videos of art, museum, cities, lifestyles that we could not get anywhere else. And as a non-English guy I got access to millions of lines of written text and audio to actually improve my English.
It was a whole new world that may have changed my life forever. ChatGPT is a shitty Google replacement in comparison, and it's a bad alternative due to being censored in its main instructions.
grishka
8 hours ago
In the early web, there already were forums. There were chats. There were news websites. There were online stores. There were company websites with useful information. Many of these were there pretty much from the beginning. In the 90s, no one questioned the utility of the internet. Some people were just too lazy to learn how to use a computer or couldn't afford one.
LLMs in their current form have existed since what, 2021? That's 4 years already. They have hundreds of millions of active users. The only improvements we've seen so far were very much iterative ones — more of the same. Larger contexts, thinking tokens, multimodality, all that stuff. But the core concept is still the same, a very computationally expensive, very large neural network that predicts the next token of a text given a sequence of tokens. How much more time do we have to give this technology before we could judge it?
andrepd
13 hours ago
The internet predates the Web; people were playing Muds and chatting on message boards before the first browser was made at CERN.
suspended_state
13 hours ago
Of course, but does it mean that my argument is flawed? You're just shifting the discourse, without disproving anything. Do you claim that the web was useful for everyone on day one, or as useful as it is today for everyone?
I could just do the same as GP, and qualify MUDs and BBS as poor proxies for social interactions that are much more elaborate and vibrant in person.
andrepd
9 hours ago
As I pointed out in a different comment, the Internet at least was (and is) a promise of many wondrous things: video call your loved ones, talk in message boards, read an encyclopedia, download any book, watch any concert, find any scientific paper, etc etc; even though it has been for the last 15 years cannibalised by the cancerous mix of surveillance capitalism and algorithmic social media.
But LLMs are from the get-go a bad idea, a bullshit generating machine.
suspended_state
6 hours ago
> [...] LLMs are from the get-go a bad idea, a bullshit generating machine.
Is that a matter of opinion, or a fact (in which case you should be able to back it up)?
andrepd
2 hours ago
For real? x) Of course it's my opinion. What are your own comments about "silly gifs" and the "useless early internet" if not opinions? Seriously...
wiseowise
11 hours ago
Delusional take.
I’m not even heavily invested into AI, just a casual user, and it drastically cut amount of bullshit that I have to deal with in modern computing landscape.
Search, summarization, automation. All of this drastically improved with the most superior interface of them all - natural text.
barrell
7 hours ago
Not OP, but how much of the modern computing landscape bullshit that it cut was introduced in the last 5-10 years?
I think if one were to graph the progress of technology, the trend line would look pretty linear — except for a massive dip around 2014-2022.
Google searches got better and better until they suddenly started getting worse and worse. Websites started getting better and better until they suddenly got worse. Same goes for content, connection, services, developer experience, prices, etc.
I struggle to see LLMs as a major revolution, or any sort of step function change, but very easily see them as a (temporary) (partial) reset to trendline.
Lu2025
7 hours ago
Nah. It's just they are upselling us AI so aggressively it doesn't pass the sniff test anymore.
dns_snek
13 hours ago
No, your parents spoke out of ignorance and resistance towards any sort of change, I'm speaking from years of experience of both trying to use the technology productively, as well as spending a significant portion of my life in the digital world that has been impacted by it. I remember being mesmerized by GPT-3 before ChatGPT was even a thing.
The only thing that has been revolutionized over the past few years is the amount of time I now waste looking at Cloudflare turnstile and dredging through the ocean of shit that has flooded the open web to find information that is actually reliable.
Two years ago I could still search for information (let's say plumbing-related), but we're now at a point where I'll end up on a bunch of professional and traditionally trustworthy sources, only to realize after a few seconds that it's just LLM-generated slop regurgitating the same incorrect information that an LLM already gave me a few minutes prior. It sounds reasonable, it sounds authoritative, most people would accept it, but I know that it's wrong. Where do I go? Soon the answer is probably going to have to be "the library" again.
All the while less perceptive people like yourself apparently don't even seem to realize just how bad the quality of information you're consuming has become, so you cheer it on while labeling us stubborn, resistant to change, or even luddites.
anal_reactor
12 hours ago
Personally, I have three use cases for AI:
1. Image upscaling. I am decorating my house and AI allowed me to get huge prints from tiny shitty pictures. It's not perfect, but it works.
2. Conversational partner. It's a different question whether it's a good or a bad thing, but I can spend hours talking to Claude about things in general. He's expensive though.
3. Learning the basics of something. I'm trying to install LED strips and ChatGPT taught me the basics of how that's supposed to work. ChatGPT also suggested what plants might survive in my living room and how to take care of them (we'll see if that works though).
And this is just my personal use case, I'm sure there are more. My point is, you're wrong.
> All the while less perceptive people like yourself apparently don't even seem to realize just how bad the quality of information you're consuming has become, so you cheer it on while labeling us stubborn, resistant to change, or even luddites.
Literally same shit my parents would say while I was cross-checking multiple websites for information and they were watching the only TV channel that our antenna would pick up.
wood_spirit
12 hours ago
> Conversational partner
This is the AI holy grail. When tech companies can get users to think of the AI as a friend ( -> best friend -> only friend -> lover ) and be loyal to it, it will make the monetisation possibilities of the ad-fuelled outrage engagement of the past 10 years look silly.
Scary that that is the endgame for “social” media.
Applejinx
12 hours ago
People were already willing to do that with Eliza. When you combine LLMs with a bit of persistent storage, WOOF. It's gonna be extremely nasty.
Gaslight reality, coming right up, at scale. Only costs like ten degrees of global warming and the death of the world as we know it. But WOW, the opportunities for massed social control!
A4ET8a8uTh0_v2
7 hours ago
<< 1. Image upscaling. I am decorating my house and AI allowed me to get huge prints from tiny shitty pictures. It's not perfect, but it works.
I have a buddy who made me realize how awesome FSR4 is [1]. This is likely one of the best real-world uses so far. Granted, that's not an LLM, but it's great at upscaling.
[1]https://overclock3d.net/news/software/what-you-need-to-know-... [2]https://www.pcgamesn.com/amd/fsr-fidelity-fx-super-resolutio...
contrast
11 hours ago
From my perspective, your argument is:
- AI gives me huge, mediocre prints of my own shitty pictures to fill up my house with

- AI means I don’t have to talk to other people

- AI means I can learn things online that previously I could have learned online (not sure what has changed here!)

- People who cross-check multiple websites for information have a limited perspective compared to relying on a couple of AI channels
Overall, doesn’t your evidence support the point that AI is reducing the quality of your information diet?
You paint a picture that looks exactly like the 21st-century version of an elderly couple with only a few TV channels: a handful of familiar information sources, better now because we can make sure they only show what we want to see, and little contact with other people.
andrepd
13 hours ago
The internet was at least (and still is) a promise of many wondrous things: video call your loved ones, talk on message boards, read an encyclopedia, download any book, watch any concert, find any scientific paper, etc; even though for the last 15 years it has been cannibalised by the cancerous mix of surveillance capitalism and algorithmic social media.
LLMs are from the get-go a bad idea, a bullshit generating machine.
rapnie
13 hours ago
The "move fast and break things" rushed embrace of anything AI reminds me of young, wild children who are blissfully unaware of any danger while their responsible parents try to keep them safe. It is lovely if children can believe in magic, but part of growing up involves facing reality and making responsible choices.
wiseowise
11 hours ago
Right, the same “responsible parents” who don’t know what to press to make their phone play a YouTube video, or how that “juicy milfs in your area” banner got into their Internet Explorer.