gmuslera
3 days ago
I think it adds value. Having a conversation with or around a book/document with an AI is a good use case, and having that feature as an optional, non-forced part of a book-management solution is a good match.
It is not something that runs regardless of whether we configure or activate it. Might it broaden AI use among people who find it useful? Yes. Would that end up being a dependency on a particular provider? Maybe, depending on how we use it. At some point a lot of those decisions were already made in the past by most everyone else, like using search engines, a narrow/built-in set of browsers, or desktop/mobile OSs. If using AIs is a concern, then the ship sailed long ago on many bigger things already.
Arodex
3 days ago
You are not "having a conversation". Stop anthropomorphizing. You are interacting with a machine which has its singular inhuman workings, developed and kept on a leash by a megacorporation.
Will it report me if I try to discuss "The Anarchist Cookbook" with it? Will it try to convince me "The Protocols of the Elders of Zion" is real? Will it encourage me to follow the example of the main character in "Lolita"? Will it cast in a bad light any gay or transsexual character because the megacorp behind it is forced to toe the anti-woke line of the US government in order to keep its lucrative military and anti-immigration contracts?
gmuslera
3 days ago
I'm interacting with a language model, using language and normal phrases. That is basically a conversation from my point of view, as it is mostly indistinguishable from saying the same things to, and getting similar answers from, a real person who had read that book. It doesn't matter what is on the other side, because we are talking about the exchanged text, not the participants or what they might have in their minds or chips.
VertanaNinjai
3 days ago
If you’re against anthropomorphism of LLMs then how can it “encourage” you if you’re not having a conversation? How could it “convince” you of anything or cast something in a bad light without conversing?
Your point about censorship, however, I fully agree with.
janice1999
3 days ago
> If you’re against anthropomorphism of LLMs then how can it “encourage” you if you’re not having a conversation?
Humans are more than biased word predictors.
halJordan
3 days ago
That has nothing to do with the guy who said stop anthropomorphizing llms and then proceeded to anthropomorphize an llm.
e-khadem
2 days ago
Safety is a valid concern in general. But avoidance is not the right way to approach it. Democratizing access to such tools (and developing a somewhat open ecosystem around them) for researchers and the general public is the better way, IMO. This way, people with more knowledge (not necessarily technical; philosophers, for example) can experiment with and explore this space and guide its development going forward.
Also, the base assumption of every prospering society is a population that cares about and values its freedom and rights. If the society drifts toward becoming averse to learning about these virtues ... well, there will be consequences. (And yes, we are going this way. For example, look at the current state of politics, wealth distribution, and labor rights in the US. People would have been a lot more resentful of this in the 1960s or 70s.)
The same is true of AI systems. If the general public (or at least a good percentage of researchers) study them well enough, they will force alignment with true human values. By contrast, censorship, less equitable or harder access, and delayed evaluation are really detrimental to this process: more sophisticated and hazardous models will be developed without any feedback from intellectuals or society at large, and those misaligned models can then cause a lot of harm in the hands of a rogue actor.
emodendroket
a day ago
> Will it report me if I try to discuss "The Anarchist Cookbook" with it?
I don’t know. Weren’t you already running that risk with “download metadata”?