WarOnPrivacy
a month ago
The gist is down the page. I believe the assertion is sound and worthy of consideration.
Here’s the thing: Grok didn’t say anything. Grok didn’t blame anyone. Grok didn’t apologize. Grok can’t do any of these things, because Grok is not a sentient entity capable of speech acts, blame assignment, or remorse.
What actually happened is that a user prompted Grok to generate text about the incident. The chatbot then produced a word sequence that pattern-matched to what an apology might sound like, because that’s what large language models do. They predict statistically likely next tokens based on their training data.
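To make the "predict the next token" point concrete, here is a minimal sketch: a toy bigram model that emits apology-shaped text purely because apology-shaped sequences dominate its tiny training corpus. The corpus, the next_token helper, and the sampling loop are all invented for illustration; real LLMs use neural networks over subword tokens rather than bigram counts, but the principle of sampling statistically likely continuations is the same.

    # Toy illustration (not Grok's actual mechanism): a bigram model
    # that "apologizes" only because apology-shaped sequences dominate
    # its made-up training data.
    import random
    from collections import Counter, defaultdict

    corpus = (
        "i am sorry for the error . i apologize for the mistake . "
        "i am sorry for the mistake . i apologize for the error ."
    ).split()

    # Count how often each token follows each other token.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_token(prev):
        # Sample the next token in proportion to how often it
        # followed `prev` in the training data.
        counts = follows[prev]
        return random.choices(list(counts), weights=list(counts.values()))[0]

    # Generate an "apology": statistically likely continuations, no intent.
    token = "i"
    out = [token]
    for _ in range(7):
        token = next_token(token)
        out.append(token)
    print(" ".join(out))  # e.g. "i am sorry for the mistake . i"

The output reads like contrition, but nothing in the program blames, regrets, or intends anything; it just reproduces the statistics of its inputs.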
When you ask an LLM to write an apology, it writes something that looks like an apology. That’s not the same as actually apologizing.
tim333
a month ago
I think when people say "the car says it's low on petrol" they understand the car didn't actually talk; a petrol gauge caused it to display some message. I don't know that you have to police language when people understand what's going on.
At least with LLMs it's not too hard to figure what's going on, unlike certain politicians.
afavour
a month ago
Unfortunately the discussion has been flagged. As is often the case.
ryandrake
a month ago
This is to be expected here, unfortunately. Any article that reveals anything bad about a Musk-run company will get instantly flagged. Sometimes the mods will show up and correct it, but by then the damage is done--the article has been wiped off the front page and it's Mission Accomplished for the flaggers.
ares623
a month ago
Just like human CEOs /s