furyofantares
8 hours ago
It never was "just predicting the next word", in the sense that the phrase was always a reductive description of artifacts that are plainly more than it implies.
And yet they are still "just predicting the next word", literally, in terms of how they function and how they are trained. There are still cases where it's useful to remember this.
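To be concrete about "literally": the pre-training loss is nothing more than cross-entropy against the actual next token. A minimal sketch, assuming PyTorch and the Hugging Face transformers library, with gpt2 as a stand-in model (any causal LM works the same way):

    import torch.nn.functional as F
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # gpt2 here is just a small stand-in; the objective is identical for larger models
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    text = "The cat sat on the mat."
    input_ids = tokenizer(text, return_tensors="pt").input_ids

    logits = model(input_ids).logits               # (1, seq_len, vocab_size)
    # the target at every position is simply the token that actually comes next
    loss = F.cross_entropy(
        logits[:, :-1, :].reshape(-1, logits.size(-1)),
        input_ids[:, 1:].reshape(-1),
    )
    loss.backward()  # gradients push the model toward better next-word guesses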
I'm thinking specifically of chat psychosis, where people go down a rabbit hole with these things, thinking they're gaining deep insights because they don't understand the nature of the thing they're interacting with.
They're interacting with something that does really good, but fallible, autocomplete based on three major inputs:
1) They are predicting the next word based on the pre-training data, internet text, which makes them fairly useful for general knowledge.
2) They are predicting the next word based on RL training data, which is what lets them produce conversational responses rather than autocomplete-style responses: they are autocompleting conversational data. This also makes them extremely obsequious and agreeable, inclined to go along with whatever you give them and to mimic it.
3) They are autocompleting the conversation based on your own inputs and the entire history of the conversation. This, combined with 2), means you are, to a large extent, talking to yourself, or rather to something that is very adept at mimicking and going along with your inputs (a minimal sketch of this loop follows below).
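Here is that loop as a minimal sketch, under the same assumptions as above (PyTorch, gpt2 as a stand-in): generation is just repeatedly sampling a next token given everything in the context so far, appending it, and going again. Swap in a chat-tuned model and the mechanism is unchanged; only the training data behind the predictions differs.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # the whole conversation so far is the only thing the model conditions on
    conversation = "User: I think I've discovered something profound.\nAssistant:"
    input_ids = tokenizer(conversation, return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(30):                              # one token at a time
            logits = model(input_ids).logits[:, -1, :]   # scores for the next token only
            probs = torch.softmax(logits, dim=-1)        # a distribution over the vocabulary
            next_id = torch.multinomial(probs, num_samples=1)     # sample one token
            input_ids = torch.cat([input_ids, next_id], dim=-1)  # append, repeat

    print(tokenizer.decode(input_ids[0]))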
Who, or what, are you talking to when you interact with these? Something that predicts the next word, with varying accuracy, based on a corpus of general knowledge, plus a corpus of agreeable question-and-answer exchanges, plus yourself. The general knowledge is great as long as it's fairly accurate; the sycophantic mirror of yourself sucks.