troad
21 hours ago
You can opt out, but the fact that it's opt-in by default and made to look like a simple T/C update prompt leaves a sour taste in my mouth. The five year retention period seems... excessive. I wonder if they've buried anything else objectionable in the new terms.
It was the kick in the pants I needed to cancel my subscription.
wzdd
20 hours ago
Everywhere else in Anthropic's interface, yes/no switches show blue when enabled and black when disabled. In the box they're showing about this change, the slider shows grey in both states; visit it in preferences to see the difference. It's not just disappointing but also kind of sad that someone went to the effort to do this.
senko
20 hours ago
Just did, and it behaves as expected for me in the Android app (i.e. not the dark pattern you described)
BalinKing
18 hours ago
I can confirm it's grey on both sides on the website.
tln
17 hours ago
I get blue (on) / black (off) on the website. Or blue / white in light mode.
https://claude.ai/settings/data-privacy-controls
It was easy to not opt in; I got prompted before I saw any of this.
I think they should keep the opt-in behavior past Sept 28 personally.
IAmGraydon
14 hours ago
They’re likely A/B testing the interface change, which is why people are getting inconsistent results
Aurornis
17 hours ago
It works correctly (blue on, grey off) in the iOS app. I just did it now.
riz_
20 hours ago
This is probably because there are laws in some countries that restrict how these buttons/switches can look (think cookie banners, where sometimes there is a huge green button to accept, and a tiny greyed out text somewhere for the settings).
soulofmischief
18 hours ago
Can you provide an example?
merelysounds
20 hours ago
> opt-in by default
Nitpicking: “opt in by default” doesn’t exist, it’s either “opt in”, or “opt out”; this is “opt out”. By definition an “opt out” setting is selected by default.
benterix
20 hours ago
This is not nitpicking, this is a sane reaction to someone modifying the meaning of words on the fly.
klabb3
20 hours ago
To be fair it trips people up all the time. Even precise terminology isn't great if people misuse it. Maybe it would have been better to just use "enabled by default".
troad
18 hours ago
The original meaning of sane is "physically healthy". Its usual modern meaning is "mentally healthy". You're using it to mean "reasonable".
At which exact point is language prohibited from evolving, and why is it, super coincidentally, the exact year you learnt it?
danans
18 hours ago
> At which exact point is language prohibited from evolving
Never?
troad
18 hours ago
Yes, that was my point.
card_zero
15 hours ago
And here it is, evolving before your eyes: we're killing off the maladaptive mutant which was "opt-in by default". That's the evolution that is happening here.
troad
9 hours ago
That would not be evolution, that would be an attempt at creationism. There is no evolution police, and never will be.
danparsonson
8 hours ago
Selection pressure is the evolution police.
card_zero
6 hours ago
It would be fair to compare it to selective breeding, rather than natural selection. The flip side of rejecting usage is promoting neologisms. We can do both things deliberately, I see no rule saying that language is only allowed to evolve naturally. A reasonable criticism would be that trying to change it on purpose makes for a lot of unnecessary fuss, but we can be moderate about it.
soraminazuki
15 hours ago
Diluting the distinction between opt-in and opt-out is gaslighting, not "evolution."
troad
9 hours ago
That seems like an ungenerous and frankly somewhat hysterical take.
By default, you are opted in. Perfectly clear.
The purpose of language is communication, not validating your politics.
soraminazuki
7 hours ago
> By default, you are opted in. Perfectly clear.
That's called opt-out. You're doing exactly what I described: gaslighting people into believing that opt-in and opt-out are synonymous, rendering the entire concept meaningless. The audacity of you labeling people as "political" while resorting to such Orwellian manipulation is astounding. How can you lecture others about the purpose of languages with a straight face when you're redefining terms to make it impossible for people to express a concept?
These are examples of what "opt-in by default" actually means. It means having the user manually consent to something every time, the polar opposite of your definition.
- https://arstechnica.com/gadgets/2024/06/report-new-apple-int...
- https://github.com/rom1504/img2dataset/issues/293
It's also just pure laziness to label me as "hysterical" when PR departments of companies like Google have, like you, misused the terms opt-out and opt-in in deceptive ways.
Nevermark
an hour ago
I completely agree with you from a correctness standpoint, ...
> Diluting the distinction between opt-in and opt-out is gaslighting
> That seems like an ungenerous and frankly somewhat hysterical take.
... however, this comment was a reasonable response.
Projective framing demonstrates a lack of concern for accuracy, clarity, or conviviality that is 180 degrees at odds with the point you are making and the site you are making it on.
tln
17 hours ago
> By definition an “opt out” setting is selected by default.
No, (IMO) an "opt out" setting / status is assumed/enabled without asking.
So, I think this is opt-in, until Sept 28.
Opt-in, whether pre-checked/pre-ticked or not, means the business asks you.
GDPR requires "affirmative, opt-in consent"; perhaps we should use that term to mean an opt-in that isn't pre-ticked.
whilenot-dev
16 hours ago
Regardless of whether it's opt-in or opt-out, the business will need to confirm anything it opted for you by asking. If you don't select the opposing choice in a timely fashion, the business assumes that it opted correctly in your interest and on your behalf.
> So, I think this is opt-in, until Sept 28.
If the business opted for consent, then you will effectively have the choice for refusal, a.k.a. opt-out.
I_am_tiberius
20 hours ago
"five year retention". If it's in a model once, it's there forever.
whimsicalism
19 hours ago
yes, it’s a very big loophole. and if it’s a generative model, you can just launder the data through synthetic generation/distillation to future models
Hnrobert42
20 hours ago
Is that true? Do models get rebuilt from scratch each time or do they get iterated on?
I_am_tiberius
20 hours ago
I believe the big models currently get built from scratch (with random starting weights). That wasn't my point though. I meant a model created once, might be used for a very long time. Maybe they even release the weights at one point ("open source").
disconcision
18 hours ago
this is somewhat true but i'm not sure how load bearing it is. for one, i think it's going to be a while until 'we asked the model what bob said' is as admissible as the result of a database query
JohnnyMarcone
20 hours ago
I got a pop-up when I opened the app explaining the change and an option to opt out. That seems very transparent to me.
elashri
20 hours ago
> That seems very transparent to me
Implicit consent is not transparent and should be illegal in all situations. I can't tell you that unless you opt out, you have agreed to let me rent my apartment to you.
You can say the analogy isn't directly comparable, but the overall idea is the same. If we enter a contract for me to fix your broken windows, I cannot extend it with implicit consent to do anything else in the house I see fit.
mystraline
15 hours ago
As a real world counterexample, medical in the USA does this shit all the time.
Local office will do a blood draw, send it to a 3rd-party analysis company that isn't covered by insurance, then bill you in full. And you had NO contractual relationship with the testing company.
Same scam. And its all because our government is completely captured by companies and oligopoly. Our government hasn't represented the people in a long time.
cube00
20 hours ago
> That seems very transparent to me.
Grabbing users during startup with the less privacy-focused option preselected isn't being "very transparent".
They could have forced the user to make a choice, or defaulted to not training on their content, but instead they just can't help themselves.
felideon
19 hours ago
> seems very transparent
Except not:
> The interface design has drawn criticism from privacy advocates, as the large black "Accept" button is prominently displayed while the opt-out toggle appears in smaller text beneath. The toggle defaults to "On," meaning users who quickly click "Accept" without reading the details will automatically consent to data training.
Definitely happened to me, as it was late and I was lazy.
ornornor
18 hours ago
It’s not. And also, whether you move the toggle to on or off, you still have to click accept, which makes it unclear whether you’re accepting to share your data or not.
Never mind the complete 180 on privacy.
oblio
20 hours ago
Opt-in leads to very low adoption and is the moral choice.
Opt-out leads to very high adoption and is the immoral choice.
Guess which one companies adopt when not forced through legislation?
insane_dreamer
19 hours ago
It should be off by default, with the option to opt in.
DrillShopper
20 hours ago
It should be opt-in, not opt-out.
The fact that there's no law mandating opt-in only for data retention consent (or any anti-consumer "feature") is maddening at times
jmward01
10 hours ago
The 5-year retention is the real kicker. Over the next 5 years, I find it doubtful that they won't keep modifying their ToS and presenting that opt-out 'option', so that all it will take is one accidental click and they have all your data from the start. Also, what is to stop them from removing the opt-out? Nothing says they have to give that option. 4 years and 364 days from now: a ToS change with no opt-out and a retention increase to 10 years. By then the privacy decline will already have been so huge that nobody will even notice this 'option' was never real.
Joker_vD
20 hours ago
> You can opt out
You can say that you want to opt out. What Anthropic will decide to do with your declaration is a different question.
AlexandrB
19 hours ago
I look forward to this setting getting turned on again "accidentally" when new models are released or the ToS is updated.
monegator
20 hours ago
I'm super duper sure that my data won't be stored and eventually used if i opt out
episteme
20 hours ago
What will you use instead? I’m finding Claude the best experience, since ChatGPT 5 is so slow and doesn’t give any better answers than 4.
teekert
20 hours ago
Granted, it is a stretch and not near the features of Claude (no code etc), but at least Proton's Lumo [0] is very privacy oriented.
I have to admit, I've used it a bit over the last few days and still reactivated my Claude Pro subscription today, so... let's say it's OK for casual stuff? Also useful for casual coding questions. So if you care about it, it's an option.
soiltype
17 hours ago
Since I don't use LLMs to directly code for me, I'm going to (mis?)place my trust in Kagi assistant entirely for the time being. It claims not to associate prompts with individual accounts. Small friction of keeping a browser tab open is worth it for me for now.
nocommandline
16 hours ago
If you aren't using it for coding or advanced uses like video, etc, you can try running models locally on your machine using Ollama and others like it.
Self plug here - If you aren't technical and still want to run models locally, you can try our App [1]
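For context, a minimal sketch of what talking to a local Ollama server looks like (assumptions: `ollama serve` is already running and a model has been pulled; the model name here is illustrative, while the endpoint and request fields follow Ollama's public REST API):

```python
import json
import urllib.request

# Build a request against a locally running Ollama server.
# Nothing leaves your machine: the endpoint is localhost.
def build_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# To actually run it (requires Ollama installed and e.g. `ollama pull llama3`):
# with urllib.request.urlopen(build_request("Why is the sky blue?")) as resp:
#     print(json.loads(resp.read())["response"])
```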
weregiraffe
18 hours ago
>What will you use instead? I’m finding Claude the best experience since ChatGPT 5 is so slow and not any better answers than 4.
You could try programming with your own brain
javierluraschi
20 hours ago
ehnto
20 hours ago
From the frypan into the fire. I think the reality, proven by history and even just these short five years, is that no company will hold onto its ethics in this space. This should surprise no one, since the first step of the enterprise is hoovering up the world's data without permission.
Arubis
20 hours ago
Worse by every measure.
weberer
17 hours ago
What metrics are you looking at? Grok 4 outperforms Claude 4 Opus in the Artificial Analysis Intelligence Index.
mac-attack
19 hours ago
What sane person would downgrade to Grok
javcasas
20 hours ago
You can request your data to not be used. Your request will appropriately be read and redirected to /dev/null.
darepublic
15 hours ago
It's almost like this multi-billion-dollar company is misanthropic, despite their platitudes. Should I not hold my breath on Anthropic helping facilitate "an era of AI abundance for all"? (To quote a rejected PR applicant to Anthropic from the front page)
smallerfish
20 hours ago
Settings > Privacy > Privacy Settings
kossTKR
20 hours ago
i don't see any setting related to this? just:
Export data
Shared chats
Location metadata
Review and update terms and conditions
I'm in the EU, maybe that's helping me?
croes
20 hours ago
Have you clicked "Review and update terms and conditions"?
It's part of the update
kossTKR
20 hours ago
Oh i see thanks. That's a dark design pattern, hiding stuff like that.
No one cares about anything else, but they have lots of superfluous text and they're calling it "help us get better", blah blah. It's "help us earn more money and potentially sell or leak your extremely private info", so they are lying.
Considering cancelling my subscription right this moment.
I hope the EU at least considers banning or levying extreme fines on companies trying to retroactively use people's extremely private data like this; it's completely over the line.
klabb3
19 hours ago
EU or not, it baffles me that people don't see this glaring conflict of interest. AI companies both produce the model and rent out inference. In other words, you're expecting that the company that (a) most desperately craves your data and (b) happens to collect large amounts of high-quality data from you will simply not use it. It's like asking a child to keep your candy safe.
I'd love to live in a society where laws could effectively regulate these things. I would also like a Pony.
croes
18 hours ago
>It's like asking a child to keep your candy safe
That's why we don't hand billions of dollars to a child. Maybe we should treat AI companies similarly.
kossTKR
19 hours ago
This is why we need actual regulation, and not the semi fascist monopolist corporatocracy we've evolved into now.
It's only utopian because it's become so incredibly bad.
We shouldn't expect less, and we shouldn't push guilt or responsibility onto the consumer; we should push for more. Unless, that is, you actively want your neighbour, your mom, and 95% of the population to be in constant trouble with absolutely everything from tech to food safety, chemicals, or healthcare. Most people aren't rich engineers like on this forum, and I don't want to research for 5 hours every time I buy something because some absolute psychopaths have removed all regulation and sensible defaults so someone can party on a yacht.
frm88
18 hours ago
Bravo! This has to be the most coherent and well-formulated rant I have read in a long time. Thank you!
kordlessagain
20 hours ago
> It was the kick in the pants I needed to cancel my subscription.
As if barely two 9s of uptime wasn't enough.
ethagnawl
18 hours ago
I wonder what happens if I don't accept the new T&C? I've been successfully dismissing an updated T&C prompt in a popular group messaging application for years -- I lack the time and legal acumen to process it -- without issue.
Also, for others who want to opt out, the toggle is in the T&C modal itself.
layer8
17 hours ago
The new privacy policy automatically becomes effective on September 28, if you don’t already agree to it before. Anthropic states that “After September 28, you’ll need to make your selection on the model training setting in order to continue using Claude.”
nicce
17 hours ago
I tried to do that with WhatsApp and it eventually stopped working.
energy123
18 hours ago
Has anyone asked why OpenAI has two very separate opt-out mechanisms (one in settings, the other via a formal request that you need to lodge via their privacy or platform page)? That always seemed likely to me to be hiding a technicality that allows them to train on some forms of user data.
nicce
18 hours ago
OpenAI's temporary chat still advertises that chats are stored for 30 days, while there is a court order that everything must be retained indefinitely. I wonder why they are not obligated to state this quite extreme retention.
demarq
20 hours ago
Are you sure the opt-out isn't only for training? The retention does not seem affected by the toggle.
jasona123
20 hours ago
From the PR update: https://www.anthropic.com/news/updates-to-our-consumer-terms
“If you do not choose to provide your data for model training, you’ll continue with our existing 30-day data retention period.“
From the support page: https://privacy.anthropic.com/en/articles/10023548-how-long-...
“If you choose not to allow us to use your chats and coding sessions to improve Claude, your chats will be retained in our back-end storage systems for up to 30 days.”
zenmaster10665
20 hours ago
It seems really badly designed, or maybe it is meant to be confusing. It does not make it clear that the two are linked together, and you have to "accept" both together even though there is only a toggle on the "help us make the model better" item.
perihelions
20 hours ago
What are you replacing it with?
troad
20 hours ago
Two weeks left in the sub to figure it out, but I'm not yet sure. I was never all in on all the tooling, I mostly used it as smart search (e.g. ImageMagick incantations) and for trivial scripting that I couldn't be bothered writing myself, so I might just stick to whatever comes with Kagi, see if that doesn't cover me.
perihelions
20 hours ago
How does Kagi (claim that they) enforce privacy rights on the major LLM providers? Have they negotiated a special contract?
I'm looking at
> "When you use the Assistant by Kagi, your data is never used to train AI models (not by us or by the LLM providers), and no account information is shared with the LLM providers. By default, threads are deleted after 24 hours of inactivity. This behavior can be adjusted in the settings."
https://help.kagi.com/kagi/ai/assistant.html#privacy
And trying to reconcile those claims with the instant thread. Anthropic is listed as one of their back-end providers. Is that data retained for five years on Anthropic's end, or 24 hours? Is that data used for training Anthropic models, or has Anthropic agreed in writing not to, for Kagi clients?
FergusArgyll
20 hours ago
They are using LLMs through the API, where it's the B2B world and you can get privacy
vinnyorvinny
20 hours ago
There is an option to opt out, right? So I assume they just make sure to always opt out.
fnordlord
20 hours ago
I'm mostly replying because I was truly using it for an ImageMagick incantation yesterday. I use the API rather than chat, if that's an option for you. I put $20 into it every few months and it mostly does what I need. I'm using Raycast for quick and dirty questions and AnythingLLM for longer conversations.
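For anyone weighing the API route, a rough sketch of what a raw call looks like (the model name and key here are illustrative placeholders; the endpoint, headers, and body shape follow Anthropic's public Messages API):

```python
import json
import urllib.request

# Build a raw Anthropic Messages API request (pay-as-you-go, no chat UI).
# The default model name is illustrative; pick a current one from the docs.
def build_message_request(prompt: str, api_key: str,
                          model: str = "claude-3-5-sonnet-latest") -> urllib.request.Request:
    body = {
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://api.anthropic.com/v1/messages",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
    )

# Sending it requires a real key and network access:
# with urllib.request.urlopen(build_message_request("hi", "sk-ant-...")) as resp:
#     print(json.loads(resp.read())["content"][0]["text"])
```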
ivape
20 hours ago
I'd like to think using OpenRouter is better, but there's absolutely no guarantee from any of the individual providers with respect to privacy and no logging.