If you have a Claude account, they're going to train on your data moving forward

394 points | posted 21 hours ago
by diggan

202 Comments

TheRoque

17 hours ago

To be honest, these companies already stole terabytes of data and don't even disclose their datasets, so you have to assume they'll steal and train on anything you throw at them

marssaxman

16 hours ago

"Reading stuff freely posted on the internet" constitutes stealing now?

Seems like an excessively draconian interpretation of property rights.

michaelmior

16 hours ago

"Reading stuff freely posted on the internet" is also very different from a business having machines consume large volumes of data posted on the Internet for the purpose of generating value for them without compensating the creators. I'm not making a value judgement one way or the other, but "reading stuff freely posted on the Internet" is an oversimplification.

marssaxman

16 hours ago

Okay, but "stealing" is also an oversimplification, to the point of absurdity.

It makes no sense to put stuff up on the internet where it can freely be downloaded by anyone at any time, by people who are then free to do whatever they like with it on their own hardware, then complain that people have downloaded that stuff and done what they liked with it on their own hardware.

"Having machines consume large volumes of data posted on the Internet for the purpose of generating value for them without compensating the creators" is equally a description of Google.

schwartzworld

9 hours ago

What if that data isn’t publicly posted? For example, copilot regurgitating code from private repos, complete with comments.

ehnto

15 hours ago

They are not free to do whatever they like; there are tomes of laws across all countries governing what someone can and cannot do with your intellectual property. That we didn't have the foresight to add an "if by chance in the future someone invents artificial intelligence, that's not fair use" clause is a shame, but it doesn't make what these companies are doing ethical or moral.

I don't disagree regarding Google; I also think they exploited others' IP for their own gain. It was once symbiotic with webmasters, but when that stopped they broke that implied good-faith contract. In a sense, their snippets and widgets using others' IP while no longer providing traffic to the site were the warning shot for where we are now. We should have been modernising IP laws back then.

marssaxman

15 hours ago

I did say "free to do whatever they like on their own hardware", because intellectual property laws generally govern the transfer of such property rather than the use.

After seeing the harm done by the expansion of patent law to cover software algorithms, and the relentless abuse done under the DMCA, I am reflexively skeptical of any effort to expand intellectual property concepts.

godelski

13 hours ago

  > on their own hardware
That doesn't make it technically legal. That only makes it not worth pursuing. You can sue Joe Schmoe for a million dollars but if he doesn't have that then you're not getting a dime. But if Joe Schmoe is using that thing to make money, well then... yeah you bet your ass that's a different situation and the "worth" of pursuing is directly proportional to how much he is making. Doesn't matter if it is his own hardware or not.

Like why do you think who owns the hardware even matters? Do you really think the legality changes if I rent a GPU vs use my own? That doesn't make any sense.

marssaxman

9 hours ago

In terms of copyright law, it matters very much whether Joe Schmoe is using his own copy of the data for his own purposes, or whether he is making more copies and distributing them to other people.

If the AI companies were letting people download copies of their training data, copyright law would certainly have something to say about that. But no: once they download the training data, they keep it, and they don't share it.

godelski

9 hours ago

  > using his own copy of the data
Yes? That is a different thing? I guess we can keep moving the topic until we're talking about the same topic if you want. But honestly, I don't want to have that kind of conversation.

marssaxman

8 hours ago

How is it a different thing? Are we talking about copyright law, or not?

nerdponx

4 hours ago

It's not about the downloading of the data, it's about its use in training models, which is dubious from a copyright perspective.

vunderba

13 hours ago

> "Having machines consume large volumes of data posted on the Internet for the purpose of generating value for them without compensating the creators" is equally a description of Google.

Quid pro quo. Those sites also received traffic from the audiences searching using Google. "Without compensation" really only became a thing when Google started adding the inlined cards which distilled the site's content thus obviating the need for a user to visit the aforementioned site.

godelski

13 hours ago

I'm not sure quid pro quo even matters. A search engine is more like providing a taxi service. You're just taking people to a place.

Now the AI summaries are a different story. One where there is no quid pro quo either. It's different when that taxi service will also offer the same service as that business. It's VERY different when that taxi service will walk into that business, take their services free of charge[0], and then transfer that to the taxi customer.

[0] Scraping isn't going to generate ad revenue

[Side note] In our analogy, the little text below the link is more like the taxi service offering some advertising or a description of the business. A bit more gray here, but I think the quid pro quo phrase still applies: the taxi does this to help the customer find the right place to go, providing the business more customers. But the taxi isn't (usually) replacing the service itself.

sobkas

13 hours ago

The proper term for it is Computer-Assisted Plagiarism, CAP for short. Also, I really hope that Google doesn't claim it created the sites it crawls for its search engine.

uncletscollie

11 hours ago

That is not at all how the internet works. Try to download music from Napster and Lars will sue your ass.

marssaxman

8 hours ago

No he certainly will not; you will only get sued if you upload Lars' music to share with other people. If you download an illegal copy, the person you downloaded from is the one breaking the law.

godelski

13 hours ago

  > where it can freely be downloaded by anyone at any time, by people who are then free to do whatever they like with it on their own hardware
I think you have a strong misunderstanding of the law and the general expectation of others.

I'd like to remind you that a lot of celebrities face legal issues for posting photos of themselves. Here's a recent example with Jennifer Lopez[0]. The reason these types of lawsuits are successful is that it is theft of labor. If you hire a professional photographer to take photos of your wedding, the contract is that the photographer hands over ownership of the photos in exchange for payment. The only difference here is that the photo was taken before a contract was made. The celebrity owns the right to their body and image, but not to the photograph.

Or think about Open Source Software. Just because it is posted on GitHub does not mean you are legally allowed to use it indiscriminately. GitHub has licenses and not all of them are unrestricted. In fact, a repo without a license does not mean unfettered usage. The default is that the repo owner has the copyright[1].

  > You're under no obligation to choose a license. However, without a license, the default copyright laws apply, meaning that you retain all rights to your source code and no one may reproduce, distribute, or create derivative works from your work.
A big part of what will make a lawsuit successful or not is whether the owner has been deprived of compensation; as in, whether you make money off of someone else's work. That's why this has been the key issue in all these AI lawsuits, where the question is whether the work is transformative or not. All of this is new legal territory because the laws were not written with this usage in mind. The transformative test exists because you need to allow for parody or referencing; you don't want a situation where, say, someone can't include a video of what the president said in order to discuss what was said[2]. But this situation is much closer to "Joe stole a book, learned from that book, and made a lot of money through the knowledge he obtained from this book AND would not have been able to make without the book's help." It's just usually easier to go after the theft part of that situation. It's definitely a messy space.

But basically, just because a piece of art exists on public property does not mean you have the right to do whatever you want with it.

  >  is equally a description of Google.
Yes and no. The AI summaries? Yeah. The search engine and linking? No. The latter is a mutually beneficial service. It's one thing to own a taxi service, and it is another to offer a taxi service that will walk into a Starbucks, take a random drink off the counter, and deliver it to you. I'm not sure why this is difficult to understand.

[0] https://www.bbc.com/news/articles/cx2qqew643go

[1] https://docs.github.com/en/repositories/managing-your-reposi...

[2] https://www.youtube.com/watch?v=tUnRWh4xOCY

pigeons

14 hours ago

But they didn't only train on information the creators made freely available. They trained on copyrighted materials obtained illicitly.

pigeons

7 hours ago

I know we're not supposed to comment about downvotes, but the original comment was talking about "these companies", and none of the information indicating that they, or at the very least Meta, trained on terabytes of books downloaded from zlib, libgen, and other torrent sites is in dispute. So even if you believe that copyright should not exist, I don't see why this is not a valid rebuttal of the parent's argument that they only trained on information creators made freely available.

bdamm

16 hours ago

We didn't seem to mind when Google was doing it back in 1999, or Lycos, Altavista, etc before them... why do we care about the LLM companies doing it now?

codazoda

15 hours ago

I find LLMs extremely useful but I think the difference is that they regurgitate the content (not verbatim) instead of a link to it. This is not unlike how a human might tell their friend about it.

bdamm

13 hours ago

Google has been regurgitating content right into search results since the very beginning, and they've been providing "synopsis" type of results for over a decade.

Nevermark

15 hours ago

> This is not unlike how a human might tell their friend about it.

Is there someone who has read the whole internet? Can we all be their friend?

The entire basis of fair use is that scale matters.

nbulka

15 hours ago

Because they have terms of service they have to adhere to. We need laws to be lawful.

senko

13 hours ago

I consumed large volumes of data posted on the internet for decades, which generated a lot of value for me, without compensating the creators.

The only difference is that I (presumably) have a soul.

gist

8 hours ago

> "Reading stuff freely posted on the internet" is also very different from a business having machines consume large volumes of data posted on the Internet for the purpose of generating value for them without compensating the creators.

The fact that value is being created is irrelevant. The fact that they are making profit is irrelevant. As is non compensation to creators. There isn't any law being broken. Is there?

Bottom line in real world terms there is no expectation of privacy with a freely open and unrestricted web site. Even if that website said 'you can use this for single use but not mass use' that in itself is not legally or practically enforceable.

Let's take the example of a Christmas light show. The idea might be (in the homeowners mind) that people, families, will drive by in their cars to enjoy the light show (either a single home or the entire street or most of it). They might think 'we don't want buses full of people who paid to ride the bus' coming down the street. Unfortunately there is no way to prevent that (without the city and laws getting involved) and there is nothing wrong with the fact that the people who provide the bus are making money bringing people to see the light show.

jMyles

7 hours ago

> "Reading stuff freely posted on the internet" is also very different from a business having machines consume large volumes of data

...not if you believe in the right of general-purpose computing. If they have the right to read the data, why don't they have a right to program a computer to do it for them?

I think we all agree that they're not the good guys here, but this reasoning in particular is troubling.

TheRoque

16 hours ago

I'm not talking about that; I'm talking about downloading gigabytes of books and movies and who knows what data (since it's not disclosed) without paying. Those are not freely posted on the internet. Well, not legally anyway.

themafia

12 hours ago

Faithfully reproducing something you've previously read while passing it off as your own original work is a violation of the most basic tenets of intellectual property rights.

Sohcahtoa82

11 hours ago

This is a quintessential bad faith comment.

The reference to terabytes of stolen data refers to copyrighted material. I think you know this but chose to frame it as "stuff freely posted on the internet" in order to mislead and strawman the other comment.

marssaxman

11 hours ago

I meant it exactly as I said it. I do not agree that any theft occurred, either in law or in spirit, and I believe that reinterpretation of intellectual-property law in order to make it a crime would cause significant harm, greatly outweighing the benefits, as has been the case with every other expansion of intellectual property law I have seen.

fcarraldo

10 hours ago

Anthropic downloaded books from Library Genesis and The Pirate Library mirror. This is factual and reported on from court documents.

What’s the angle that describes this as fair use?

[0] https://www.businessinsider.com/anthropic-cut-pirated-millio...

marssaxman

9 hours ago

The simple fact that they are not republishing any of that data. Fair use does not apply, because copyright does not apply, because nothing is being copied.

Wowfunhappy

7 hours ago

So you don't think downloading something from The Pirate Bay constitutes copyright infringement provided you don't republish it?

marssaxman

6 hours ago

Precisely. The person sharing is the one breaking the law.

TheRoque

5 hours ago

That's factually wrong; downloading without sharing is also illegal.

WA

12 hours ago

Forgot the 82TB of torrented books Meta has been using for training? I mean, yeah, it’s Meta. No surprise. But I won’t believe for one second that the other players didn’t do a similar thing. They just haven’t been caught yet.

exe34

13 hours ago

so I can take a screenshot from a movie trailer on YouTube and sell posters of it now? I thought copyright still applied to the poor.

timeon

16 hours ago

What "reading"?

marssaxman

16 hours ago

The same reading search engine crawlers have been doing since time immemorial.

ehnto

15 hours ago

No one gave them permission to access their webservers back then either. Before anyone cites legal precedent: that precedent is in the US. No such precedent exists in my country, and our laws suggest that unauthorized access, regardless of "gates up or down", would constitute trespassing. There are also no protections for scrapers coming out of prior lawsuits, and copying copyrighted material is of course illegal.

Which is just to point out that the world wide web is not its own jurisdiction, and I believe AI companies are going to be finding that an ongoing problem. Unlike search, there is no symbiosis here, so there is an incentive to sue. The original IP holders do not benefit in any way. Search was different in that way.

TheRoque

16 hours ago

Search engines never claimed that their content was original, and they redirect to the original author (who gets proper attribution and the resulting traffic)

kridsdale1

16 hours ago

Looking at and gaining knowledge.

nbulka

15 hours ago

No you don't. You don't have to assume people are going to be bad! We should not normalize it either.

kolektiv

15 hours ago

You don't have to assume people are going to be bad, but it's reasonable and prudent to expect it from people who have already shown themselves to be so (in this context).

I trust people until they give me cause to do otherwise.

nbulka

15 hours ago

Training on personal data people thought was going to remain private vs. stuff out in public view (copyright or not), are two different magnitudes of ethics breaches. Opt OUT instead of Opt IN for this is CRAZY in my opinion. I hope that the reddit post is WRONG on that detail but I seriously doubt it.

I asked Claude: "If a company has a privacy policy and says they will not train on your data and then decides to change the policy in order "to make the models better for everyone." What should the terms be?"

The model suggests in the first paragraph or so EXPLICIT OPT IN. Not Opt OUT

locallost

14 hours ago

No, nbulka is correct. People should not shrug off and accept things that are wrong just because it's to be expected. It's one of the worst things you can do because as already pointed out, it just normalizes wrong.

szczepano

14 hours ago

You can and should safely assume people will do anything that's possible to do. Whether something is bad or good is a matter of historical debate.

nickpsecurity

12 hours ago

Yours is the sanest interpretation of this.

AlecSchueler

20 hours ago

Am I the only one that assumed everything was already being used for training?

simonw

18 hours ago

The hardest problem in computer science in 2025 is convincing people that you aren't loading every piece of their personal data into a machine learning training run.

The cynic in me wonders if part of Anthropic's decision process here was that, since nobody believes you when you say you're not using their data for training, you may as well do it anyway!

Giving people an opt-out might even increase trust, since people can now at least see an option that they control.

behnamoh

17 hours ago

> The cynic in me wonders if part of Anthropic's decision process here was that, since nobody believes you when you say you're not using their data for training, you may as well do it anyway!

This is why I love-hate Anthro, the same way I love-hate Apple. The reason is simple: Great product, shitty MBA-fueled managerial decisions.

Aurornis

16 hours ago

I don't understand this mindset. Why would you assume anything? It took me a couple minutes at most to check when I first started using Claude.

I check when I start using any new service. The cynical assumption that everything's being shared leads to shrugging it off and making no attempt to look for settings.

It only takes a moment to go into settings -> privacy and look.

lbrito

16 hours ago

>Why would you assume anything?

Because they already used data without permission on a much larger scale, so it's a perfectly logical assumption that they would continue doing so with their users?

simonw

14 hours ago

I don't think that logically makes sense.

Training on everything you can publicly scrape from the internet is a very different thing from training on data that your users submit directly to your service.

rpgbr

10 hours ago

>Training on everything you can publicly scrape from the internet is a very different thing from training on data that your users submit directly to your service.

Yes. It's way easier and cheaper when the data comes to you instead of having to scrape everything elsewhere.

fcarraldo

10 hours ago

OpenAI, Meta and X all train from user submitted data, in Meta and X’s case data that had been submitted long before the advent of LLMs.

It’s not a leap to assume Anthropic does the same.

adastra22

2 hours ago

By X do you mean tweets? Can you not see how different that is from training on your private conversations with an LLM?

What if you ask it for medical advice, or legal things? What if you turn on Gmail integration? Should I now be able to generate your conversations with the right prompt?

hshdhdhj4444

16 hours ago

Huh, they’re not assuming anything is “being shared”.

They’re assuming that Anthropic that is already receiving and storing your data, is also training their models on that data.

How are you supposed to disprove that as a user?

Also, the whole point is that companies cannot be trusted to follow the settings.

simonw

14 hours ago

Why can't companies be trusted to follow the settings?

If they add those settings why would you expect they wouldn't respect them? Do you think they're purely cosmetic features that don't actually do anything?

fcarraldo

10 hours ago

Also currently being discussed[0], on this very site, is both speculation that Meta is surreptitiously scanning your camera roll and a comment claiming that they worked on an earlier implementation to do just that.

It’s shocking to me that anyone who works in our industry would trust any company to do as they claim.

[0] https://news.ycombinator.com/item?id=45062910

themafia

12 hours ago

> I check when I start using any new service.

So your assumption is that the published privacy policy of any company is completely accurate, that there is no means for the company to violate this policy, and that once it is violated you will immediately be notified.

> It only takes a moment to go into settings -> privacy and look.

It only takes a moment to examine history and observe why this is wholly inadequate.

efficax

16 hours ago

A silicon valley startup would never say one thing and then do another!

Capricorn2481

16 hours ago

> It only takes a moment to go into settings -> privacy and look.

Do you have any reason to think this does anything?

serial_dev

16 hours ago

Jira ticket Nr 97437838. Training service ignores settings, trains on your data anyway. Priority: extremely low. Will probably do it in 2031 when the intern joins.

nbulka

16 hours ago

!!!!!!!!!! this... all the times HIPAA and data privacy laws get ignored directly in Jira tickets too. SMH

UltraSane

13 hours ago

Because the demand for training data is insatiable, they are already using basically everything available, they need more human-generated data, and chats with their own LLM are a perfect source.

sjapkee

10 hours ago

Bro really thinks privacy settings work

hexage1814

19 hours ago

This. It's the same innocence as people who believe that when you delete a document on Google/Meta/Apple/Microsoft servers, it "really" gets deleted. Google most likely has a backup of every piece of information it has indexed in the last 20 years or so. It would make the Internet Archive envious.

giancarlostoro

19 hours ago

With the privacy laws out there, I do genuinely think data eventually gets purged even from backups. I remember a really cool YouTube video shared here on HN that Google no longer has publicly; it was about the journey of an email and all the behind-the-scenes things, from physical security at a data center to the patented hard-drive shredders they use once the drives are to be tossed. I wish Google had kept that video public and online; it was a great watch.

I know once you delete something on Discord it's poof, and that's the end of that. I've reported things that, if anyone at Discord could access a copy of them, they would have called the police. There are a lot of awful trolls on chat platforms that post awful things.

diggan

18 hours ago

> I know once you delete something on Discord its poof, and that's the end of that. I've reported things that if anyone at Discord could access a copy of they would have called police. There's a lot of awful trolls on chat platforms that post awful things.

That's not what Discord themselves say, is that coming from Discord, the police or someone else?

> Once you delete content, it will no longer be available to other users (though it may take some time to clear cached uploads). Deleted content will also be deleted from Discord’s systems, but we may retain content longer if we have a legal obligation to preserve it as described below. Public posts may also be retained for 180 days to two years for use by Discord as described in our Privacy Policy (for example, to help us train models that proactively detect content that violates our policies). - https://support.discord.com/hc/en-us/articles/5431812448791-...

Seems to be something that decides if the content should be deleted faster, or kept for between 180 days - 2 years. So even for Discord, "once you delete something on Discord its poof" isn't 100% accurate.

giancarlostoro

18 hours ago

At least in terms of reporting content to "Trust and Safety", they certainly behave like it's gone forever. I have had friends report illegal content, to both Discord and law enforcement, and the takeaway seemed like it was gone. Now it's making me wonder if Discord is really archiving CSAM material for two years and not helping law enforcement unless a proper warrant is involved. Yikes.

diggan

17 hours ago

> now it's making me wonder if Discord is really archiving CSAM material for two years and not helping law enforcement unless a proper warrant is involved

Yes, of course, to both of those. Discord is a for-profit business with a limited number of humans who can focus on things, so the less they can focus on, the better (in the minds of the people running the business, at least). So why do anything when you can do nothing, and everything stays the same? Of course, when someone has a warrant, they really have to do something, but unless there is one, there is no incentive for them to do anything about it.

conradev

18 hours ago

My understanding is that for Gmail specifically, they keep a record of every email ever received regardless of deletion status, but I'm not able to find any good sources.

diggan

17 hours ago

Even if Google are not storing it, we can sleep safely as NSA's PRISM V2 probably got an archive of it too :) Albeit hard to acquire a dump of those archives, for now at least...

bwillard

18 hours ago

Officially, up to you if you believe they are following their policies, all of the companies have published statements on how long they keep their data after deletion (which customers broadly want to support recovery if something goes wrong).

- Google: active storage for "around 2 months from the time of deletion" and in backups "for up to 6 months": https://policies.google.com/technologies/retention?hl=en-US

- Meta: 90 days: https://www.meta.com/help/quest/609965707113909/

- Apple/iCloud: 30 days: https://support.apple.com/guide/icloud/delete-files-mm3b7fcd...

- Microsoft: 30-180 days: https://learn.microsoft.com/en-us/compliance/assurance/assur...

So if it ends up that they are storing data longer there can be consequences (GDPR, CCPA, FTC).

toyg

17 hours ago

TBH, I'd be surprised if they kept significant amounts around for longer, for the simple reason that it costs money. Yes, drives are cheap, but the electricity to keep them online for months and years is definitely not free, and physical space is not infinite. This is also why some of their services have pretty aggressive deletion policies (like recordings in MS Teams, etc).

demarq

17 hours ago

Same, I thought the free accounts were always trained on. Which in my opinion is reasonable since you could delete the data you didn’t want to keep on the service.

But including paid accounts and imposing a 5-year retention period is confounding.

A4ET8a8uTh0_v2

20 hours ago

I mean, I am sure there are individuals who still believe in the basic value of one's word within the framework of our civilization. But having seen those words not just twisted beyond recognition to fit a specific idea, but simply ignored when they were no longer convenient, it should be no surprise that a cynical stance is now common.

The question is: how does that affect their choices. How much ends up being gated what previously would have ended up in the open?

Me: I am using a local variant ( and attempting to build something I think I can control better ).

layer8

17 hours ago

You wouldn’t have needed to assume if you had read their previous policy.

racl101

15 hours ago

Nope. I always assumed they were and acted accordingly.

stevenhuang

10 hours ago

... and your assumptions were wrong until now. Not as much of a dunk as you seem to think it is.

mrdependable

16 hours ago

This is going to turn into one of those situations where we find out down the line that they trained on everyone's data whether they opted out or not. I want to keep using Claude, but I also don't want all the solutions I come up with to become common knowledge.

skybrian

15 hours ago

When has a company ignored an opt-out preference before? It sounds like you have something in mind.

tagawa

12 hours ago

Not the OP but…

https://www.jdsupra.com/legalnews/healthline-media-agrees-to...

"Healthline.com provided an opt-out mechanism, but it was misconfigured and Healthline failed to test it, resulting in data being shared with third parties even after consumers elected to opt out.”

https://www.bbc.com/news/technology-65772154

"The company agreed to pay the US Federal Trade Commission (FTC) after it was accused of failing to delete Alexa recordings at the request of parents.”

https://www.mediapost.com/publications/article/405635/califo...

"According to the [California Privacy Protection] agency, Todd Snyder told website visitors they could opt out of data sharing, but didn't actually allow them to do so for 40 days in late 2023 because its opt-out mechanism was improperly configured."

ukd1

15 hours ago

I think I'm fine with the model getting better because something I did helped it, or was created with it. I'm not happy if it's directly 1:1 or attributed to me; the Chatham House Rule for this would be great.

adastra22

2 hours ago

That’s impossible. You can’t anonymize data at scale.

treyd

15 hours ago

Why don't you want to share your insights? I agree doing it in a more direct way would be better than it leaking through AI training you don't control. But your phrasing seems stronger than that.

vb-8448

18 hours ago

So, I guess they ran out of data to train on ...

I wonder how much they can rely on the data and what kind of "knowledge" they can extract. I never give feedback, and most of the time (let's say 5 out of 6) the result cc produces is simply wrong. How can they know whether the result is valuable or not?

debesyla

16 hours ago

Maybe they can use the same method Google does: if a user clicked a link and didn't try to search again, assume the link had the intended result.

So your silence can be used as a warmish signal that you were satisfied. (...or not. Depends on your usage fingerprint.)
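The click-without-requery heuristic is a classic form of implicit-feedback labeling. A minimal sketch of the idea, with all names and fields hypothetical (this is an illustration, not any vendor's actual pipeline):

```python
from dataclasses import dataclass

@dataclass
class SearchEvent:
    """One search interaction. Fields are hypothetical for illustration."""
    query: str
    clicked: bool    # user clicked a result
    requeried: bool  # user issued a reformulated query shortly after

def label_satisfaction(event: SearchEvent) -> str:
    # A click followed by silence is a weak positive signal;
    # a click followed by a quick reformulation suggests dissatisfaction.
    if event.clicked and not event.requeried:
        return "satisfied"    # clicked and went quiet
    if event.clicked and event.requeried:
        return "unsatisfied"  # clicked, then came back to search again
    return "unknown"          # no click: abandoned, or answered on the page

events = [
    SearchEvent("fix npm EACCES error", clicked=True, requeried=False),
    SearchEvent("fix npm EACCES error", clicked=True, requeried=True),
]
print([label_satisfaction(e) for e in events])  # ['satisfied', 'unsatisfied']
```

The "unknown" bucket is what makes this only a warmish signal: no click can mean abandonment, or it can mean the results page itself answered the question.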

rurp

16 hours ago

I expect that's a very weak signal. When I ask a question and get a completely wrong answer from Claude I usually drop the chat and look elsewhere.

jlarocco

18 hours ago

How can they know anything they train on is valuable?

At the end of the day it doesn't matter. You got the wrong answer and didn't complain, so why would they care?

vb-8448

10 hours ago

In general, I think any human-generated content from before 2022 is valuable, because someone did some kind of validation (think of a Stack Overflow answer with users confirming that a specific answer fixed their problem).

If they start to feed the next model with LLM-generated crap, the overall performance will drop, and instead of getting a useful answer 1 time in 5 it will be 1 in 10(?), and probably a lot of us will cancel our subscriptions ... so in the end I think it matters.

awalsh128

2 hours ago

Wow, a 5-year retention period. That seems so arbitrary and excessive. As someone who has been involved with privacy compliance, this is crazy. Does this apply to European users too? I don't know what the business justification for this would be, but I'm sure some lawyer could explain it better.

donohoe

16 hours ago

I just logged in on the iOS app and it immediately had a popup giving me the option to opt-out.

So yeah, annoying, but they handled it well.

robwwilliams

8 hours ago

Agreed. I actually want and need Claude and other LLMs to learn from my interactions with them. It's exceedingly frustrating that LLMs do not yet hold a long-term memory or embedded trace of conversations per user. I have been asking Anthropic for this option as an opt-in for 6 months.

Sure, I understand the concerns many of you have.

But in my niche areas of cognitive research, genetics, and neurophilosophy I need Claude to be much smarter than it is now. I am happy to share what I know with Anthropic so that I eventually have a better companion thinker.

indigoabstract

15 hours ago

Modern AI is built on data. Trusting that our conversations will not be used for training the model is a bit like giving a glutton their favourite food but making them promise not to eat it. Sure, why not.

I still expect that our conversations will not leave the premises (ie end up on the internet), because that would be something else, but other than that, I knew what I signed up for.

nbulka

15 hours ago

Don't normalize this. There are contractual obligations that we have to enforce in order to keep our privacy and humanity.

aatd86

16 hours ago

I don't know what they've been training on, but I just canceled Claude for the second time. Besides the numerous UI bugs in the web interface and the incessant flickering, it has gotten weirdly condescending and negative in a way I hadn't observed either in the past or with other LLMs.

Probably people accused it of being sycophantic and they tried to adjust that, but didn't do it well. It would rather criticize and make assumptions about my behavior than keep things technical. Ha!

I prefer Gemini. It seems a bit stressed, always assuming that I might be frustrated by its answers, which is also a weird assumption, but at least it's not outright disrespectful.

So I am back to testing ChatGPT. I keep switching.

atkailash

4 hours ago

I’ve found Gemini argumentative and maybe condescending too.

Mistral feels like a good balance between haughtiness and sycophancy.

novok

15 hours ago

They need adjustable dials like they do in some API interfaces for power users. Like a sycophancy dial, a safety dial or "turn off the child lock permanently" like they have for microwaves.

MaJiX19

15 hours ago

Unbelievable. This is on par ethically with bad Meta decisions as far as privacy is concerned. What a dark pattern! What a mess! Look at this botched rollout... It's not ok to do this folks. This is literally what the modal looked like for me on chats that were already in progress. And NO I did not want them to train on this or ANY OF MY DATA UNDER A DIFFERENT CONTRACT.

Anthropic PR: "Ma'am, you opted IN to training on your therapy sessions and intellectual property and algorithms and salary and family history!" Don't you remember the modal???

The Modal: https://imgur.com/afqMi0Z

binary132

18 hours ago

$COMPANY reneged on their solemn pinky promise to not do the bad thing this time? Quelle surprise!

robwwilliams

8 hours ago

It is optional. I want the sharing option.

carbonbioxide

4 hours ago

If your plan to keep improving LLMs is based on more training data, then we have an AI hype problem.

andreagrandi

13 hours ago

You can opt out. It's written quite clearly.

distances

10 hours ago

I don't think that retention part was clear at all. It was separate from the opt-out. I assume I'm now opted out but that they'll keep the data for five years anyway.

robwwilliams

8 hours ago

You want them to flush your conversations on your own schedule. That I can understand. If you delete a conversation it should be DELETED.

nbulka

16 hours ago

For those who do not, or cannot, read this announcement prior to September 28th (think of people in the hospital, traveling, or who missed an email...) is this not a total breach of contract?

Legally, I don't understand how Anthropic's lawyers would have allowed this. Maybe I am just naively optimistic about these matters? I am a Max customer and I might leave! Talk about a "rug pull" ... and I'm considering moving to an inferior provider! Privacy is a fundamental human right. Please do better; we have not learned our lesson in tech or society because no one is facing any consequences.

t0mas88

4 hours ago

It only applies to new conversations, and they show a popup with this info in the app. Probably as a result of the legal team considering the options you listed.

darrmit

17 hours ago

I have never input anything into one of these tools that I wasn’t entirely comfortable with them using for training or any other reason. I just assumed it was happening.

kristopolous

12 hours ago

This needs to be a two way street ... one where we have programs that feed in piles of constant garbage unless they pay us for the privilege of clean data.

When things are free (in this case your input), they will get abused. Put a cost on it.

0xbadc0de5

19 hours ago

I kind of already assumed they were. I've got some pretty niche use-cases that I'd like to see the models get better at thinking their way through. I benefit from their training on my interactions. So I'll opt in. But I'll also recognize that others might not feel that way, so the services should provide a way for users to opt out.

robwwilliams

8 hours ago

Ditto: I was delighted to see this as an option. Am I missing some details? I do not understand the comments about "rug pulls".

carlhjerpe

15 hours ago

Thanks for informing me, I've opted out.

Generally I upvote chats when I feel like sharing, which gives my chat to Anthropic; I'll keep doing that as before, with training opted out.

esafak

19 hours ago

"If you use Claude for Work, via the API, or other services under our Commercial Terms or other Agreements, then these changes don't apply to you."

troyvit

18 hours ago

This is what I don't get. It's so much simpler to use the LLMs with other tools (aider for me) that I don't understand why people are avoiding the API creating monthly accounts to begin with. Is it cheaper? Is Claude Code really that awesome or something? By not even looking at it I guess I never have to know, but from where I sit it seems like people are just putting themselves through a lot of b.s. in order to marry a start-up.

furyofantares

17 hours ago

> Is it cheaper?

"ccusage" is telling me I would have spent $2010.90 in the last month if I was paying via the API, rather than $200.

But also I do feel Claude Code is quite a bit better than other things I've used, when using the same model. I'm not sure why though, it's a fairly simple program with only a few prompts and only a few tools, it seems like others could catch up immediately by learning some lessons from it.
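The kind of comparison ccusage reports can be sketched like this (the per-token rates and token counts below are placeholders I made up for illustration, not Anthropic's actual prices or this commenter's usage):

```python
# Estimate what a month of subscription usage would have cost via the API.
# Rates are illustrative placeholders, in USD per million tokens.
INPUT_RATE = 3.00
OUTPUT_RATE = 15.00

def api_equivalent_cost(input_tokens, output_tokens):
    """API-equivalent cost of a month's usage at the placeholder rates."""
    return (input_tokens / 1e6) * INPUT_RATE + (output_tokens / 1e6) * OUTPUT_RATE

# A hypothetical heavy month, compared against a flat $200 subscription:
monthly = api_equivalent_cost(input_tokens=400_000_000, output_tokens=54_000_000)
multiplier = monthly / 200  # how many times the flat price you "used"
```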

troyvit

16 hours ago

Daaaaamn that makes a lot of sense then. For my limited use it doesn't add up but clearly the more deeply embedded the tool is the more it makes sense.

data-ottawa

17 hours ago

According to Claude cost I get about 5x value in token cost by having a max subscription.

I upgraded after I hit the equivalent spend in API fees in a month.

joewhale

17 hours ago

i logged on and they did ask right away in a popup window.

javier_e06

20 hours ago

I use AI to solve problems, not to check the weather or deciding what to wear. As such it makes sense for AI to remember when it hits the nail on the head.

leetbulb

20 hours ago

Agreed. Typically I would be against something like this, but in this case, have it.

AlexandrB

19 hours ago

How do you feel about this data being used to target advertising at you in the inevitable rush to monetize these AI products?

christophilus

18 hours ago

I feel like that’s annoying, but it’s a drop in the bucket vs the current firehose of ads, and there’s a slim shot these ads might actually be interesting or relevant to me.

Anyway, I’ll block them like I do everything.

ath3nd

18 hours ago

Oh, sweet summer child, your SOLUTIONS will be trained on and will be given to others without your permission and knowledge.

But now that you bring up ads, I guarantee you that those will somehow be incorporated in Claude soon.

ath3nd

18 hours ago

And if you solve a novel problem, Claude will happily take your reasoning and give it to the next user trying to solve the same novel problem. Imagine if that was a guy working for the competition :)

racl101

15 hours ago

I always assumed they were. I've been masking / scrubbing my test data this whole time.

hkon

20 hours ago

With the amount of times Claude is visiting my websites I'd say they are very desperate for data.

djmips

9 hours ago

"If you choose not to allow us to use your chats and coding sessions to improve Claude, your chats will be retained in our back-end storage systems for up to 30 days."

ratg13

20 hours ago

I can understand training AIs on books, and even internet forums, but I can't see how training an AI on lots of dumb questions, with probably an excessive number of grammar and spelling errors, will somehow make it smarter.

nrclark

19 hours ago

Depends on how you’re using the data. There’s a pretty strong correctness signal in the user behavior.

Did they rephrase the question? Probably the first answer was wrong. Did the session end? Good chance the answer was acceptable. Did they ask follow-ups? What kind? Etc.
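A toy version of that labeling might look like this (the event names and label mapping are my own illustration, not anything Anthropic has described):

```python
# Derive a weak correctness label for an assistant's answer from what the
# user did next. Heuristics are purely illustrative.
def label_answer(next_action):
    """next_action: the user's next observed action after an answer."""
    labels = {
        "rephrased_question": "likely_wrong",  # asked the same thing again
        "ended_session": "likely_ok",          # no complaint, moved on
        "followup_question": "partially_ok",   # engaged, but needed more
        "copied_code": "likely_ok",            # strong acceptance signal
        "thumbs_down": "wrong",                # explicit negative feedback
    }
    return labels.get(next_action, "unknown")
```

Each signal is noisy on its own (an ended session might be a rage-quit), so it would only be useful in aggregate.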

vb-8448

18 hours ago

I'm used to doing the same task 4 or 5 times (different sessions, similar prompts), and most of the time the result is useless or completely wrong. Sometimes I go back and pick the first result, other times none of them, other times a mix of them. I'm wondering how they can extract value from this.

dudefeliciano

19 hours ago

> Did the session end? Good chance the answer was acceptable.

Or that the user just ragequit

victorbjorklund

13 hours ago

Doubt they feed everything in. They probably pick out a small subset of conversations for the training round.

mrweasel

19 hours ago

They train AI on Reddit and Stack Overflow questions, I can't see it getting any worse.

dahsameer

20 hours ago

> and even internet forums

i would consider internet forums also includes a lot of dumb questions

ratg13

19 hours ago

Agree, but people generally take a small pause before saying stuff online.

In 'private', people are less ashamed of their ignorance, and also know they can say gibberish and the AI will figure it out.

elitan

14 hours ago

i'm fine with it. happy to seed.

mushufasa

17 hours ago

Isn't this just a change from opt-in to opt-out? It will make a big difference, but it still gives individuals control of their data governance.

boredatoms

15 hours ago

..and there goes our authorization to use it at work

gooob

20 hours ago

and now the LLM gets to observe itself, heh heh heh

wat10000

19 hours ago

Rather misleading title. Missing the important “unless you ask them not to” part. Sounds like a bit of a dark pattern to push you into accepting it and that’s not cool, but you do get a choice.

robwwilliams

8 hours ago

You are right. To the best of my knowledge there has not been a way to share conversations with Anthropic, and therefore with future versions of Claude. There was no opt-in option as far as I know.

Straighten me out if I am wrong.

I need this opt-in to improve the foundation model they have trained. It is good, but not good enough.

internet2000

19 hours ago

I’m fine with that.

dudefeliciano

19 hours ago

you are fine with paying, 20, 90 or 200 euros a month AND having your data mined? i must be getting old...

jlarocco

18 hours ago

It's a tool that depends on data mining everything. The only surprise is that they weren't already mining what people feed into it.

renewiltord

12 hours ago

You have it backwards. I'm paying $200/month and so I want the thing to reflect more what I want than what the general public wants. They'd better be mining my data, despite that being a term I haven't heard in over a decade.

Some people were upset that Google Maps would just take the data that contributors give it for free. My problem was different. I use Google Maps and I want a way to correct it. I don't want to be paid for this. I want the tool I'm using to be correctable by me. The more I pay for it, the more I want it to be editable by me. I don't want compensation. I want it to be better. And I can make it better.

It's sort of why we picked Kong at a different company. Open source core meant that we could edit stuff we didn't like. In fact, considering that we paid, we wanted them to upstream what we changed.

robwwilliams

8 hours ago

Yes, I share your take on this change - a path to improvements in many domains.

I agree that these improvements will accrue within a proprietary, for-profit model. But it's still a net positive for my work.

Give me a FOSS LLM with Claude 4 Sonnet performance and a 1-million-token context and I will work even harder toward improvements in my areas of NIH-funded biological research.

bethekidyouwant

18 hours ago

what are you doing with the data? What is your legacy going to be other than the data that you leave to be mined? Do you just not want something else to benefit from something that has no benefit to you? If so, why?

dudefeliciano

17 hours ago

There is a myriad of reasons i may not want my data to be used. Maybe I am working on proprietary systems, maybe I am using Claude as a psychotherapist, maybe I use it as a tax advisor, the list goes on. Is it unrealistic to think that data may be extrapolated and connected to me in the future?

godshatter

13 hours ago

Use gpt4all or one of the other locally-hosted AI chatbots. Download a model and see if it works for you. It won't be as good as the latest models out there presumably, but at least you're not sending any chat data anywhere.

dudefeliciano

13 hours ago

I'm really disheartened by all this shrugging of shoulders for a company mining user data. On hackernews... Anthropic initally positioned itself as a company with privacy and data protection as a priority, and had a real chance to claim moral high ground compared to its competitors.

godshatter

11 hours ago

I'm not shrugging my shoulders about this. I already don't use online LLMs for anything important because of privacy concerns (i.e. I assume anything that lands on their server is fair game for them to do what they like with). If you have privacy concerns about your data, host it locally. I have played around with self-hosted LLMs and they work for what I need, which is mostly playing around with them for entertainment.

I'm careful about what data of mine lands on someone else's server, so I'm not a fan of this even without the dark patterns.

robwwilliams

8 hours ago

Is there a reason you cannot opt out? Or do you not trust Anthropic's opt-out implementation?

tiahura

16 hours ago

> Maybe I am working on proprietary systems, maybe I am using Claude as a psychotherapist, maybe I use it as a tax advisor, the list goes on.

Then use the business version.

dudefeliciano

16 hours ago

lol wtf kind of reply is that? Maybe relevant for the proprietary system part. I guess you would be fine with a tax advisor or therapist charging you more for a "gossip free service" where they will not disclose your personal information if you just pay more.

To be clear, i don't use claude for any of those purposes, it's the principle i am talking about.

dudefeliciano

17 hours ago

Oh and i forgot the most important thing, I am paying good money for this service, now they also mine my data? I grew up in a time where "if it's free, you're the product". I guess that's not even the case anymore, if you pay, you're still the product...

victorbjorklund

13 hours ago

That was never the case. You paid money for food in the store? Tracked and data mined. You pay for Spotify? Tracked and data mined. The only things not tracked are those where it's impossible or worthless to track them.

dudefeliciano

13 hours ago

i guess it's super cool then shrug. There is also a difference between tracking the music you listen to and even the food you buy, and "conversations" you have that may include much more private and sensitive topics.

diamond559

17 hours ago

"Training" is now a euphemism for stealing. Guess I can't write any production level code w/ this...

tiahura

16 hours ago

Some people would say that since the owner isn't being deprived of anything, it's not stealing.

kukkeliskuu

13 hours ago

Let's say I use an LLM to develop novel software called X. Then my work is used to train the model. Then somebody uses the model to recreate a copy of the software X with the prompt "create software that works just like X". My novel software is no longer unique. So how come I have not been deprived of anything?

flerchin

19 hours ago

Maybe a value to users if done correctly. The way it is right now, you can't teach the model anything. When it gets something wrong, it will probably get the same thing wrong again in another chat.

phallus

19 hours ago

That's not how LLMs work.

some_random

17 hours ago

Title is misleading, they're now opt-out rather than opt-in to your data being used for training. All you have to do is flip a single switch in the options to turn it off, I don't understand why everyone is treating this as being such a big deal.

Edit: I just logged in to opt out, they presented me with the switch directly. It was two clicks.

drdaeman

15 hours ago

[Edit: turns out I've got it wrong, and 5-year retention only said to apply to the data they're allowed to train on. This changes things for me.]

Personally, I don't mind training, as long as I have a say on the matter - and they have a switch for this. Opt-out is not exactly cool, but I've got the popup in my face, almost a month before the changes, and that's respectful enough for me.

This said, I've just canceled my subscription because this new 5-year mandatory data retention is a deal breaker for me. I don't mind 30 or 60 days or even 90 days - I can understand the need to briefly persist the data. But for anything long-term (and 5 years is effectively permanent) I want to be respected with having a choice, and I'm provided none except for "don't use".

A shame, but fortunately they're not a monopoly.

some_random

15 hours ago

Data retention appears to be predicated on opting in to allowing training. If you don't opt in, they retain it for the same 30 days they were already retaining it for. https://www.anthropic.com/news/updates-to-our-consumer-terms

drdaeman

15 hours ago

Oh! Thank you!

That popup was confusing as hell then, because I read it as two separate points: that they're making training opt-out, and that they're changing data retention to 5 years, independent of each other. I got upset over this and didn't really research the nuances - and it turns out I had it all wrong.

Appreciate your comment, it's really helpful!

I hope they change the language to make it clear 5 years only applies to the chats they're allowed to train models on.

(Weirdly, I can't find the word "years" anywhere on their Privacy Policy, and the only instance on the Consumer Terms of Service pages is about being of legal age over 18 years old.)

rkomorn

17 hours ago

I think any switch from opt-out-by-default to opt-in-by-default sucks, especially when it has no clear immediate benefit to the person being opted in.

Disclaimer: not a Claude user (not even a prospective one)

latexr

17 hours ago

> any switch from opt-out-by-default to opt-in-by-default sucks

It’s the reverse. This was opt-in and is now opt-out. Opt means choose so when “the default is opt-in” it means the option is “no” by default and you have the option to make it “yes”.

rkomorn

17 hours ago

> they're now opt-out rather than opt-in to your data being used for training

This is what the comment I was replying to said. I took that to mean "you have to opt out (ie you're opted in by default)".

stkdump

14 hours ago

The meaning of the term "opt-in" is that it is off by default and has to be manually enabled. "opt-out" means it is on by default and you have to manually turn it off. "opt-in-by-default" or "opted in by default" are needlessly confusing.

rkomorn

13 hours ago

True, yes. Totally agree with you on the fundamental definition of opt-in vs opt-out.

You can also have a checkbox that says "I consent to having my data used for training", which would look like "opting in", and it could be true by default.

Or you can have a checkbox that says "Leave my data out of your training set", which would look like "opting out", and which could be unchecked by default.

Technically, they're both "opt-out", but I've seen enough examples (intentionally confusing and arguably "dark patterns") that I personally don't really consider "it's opt-in" to be a complete statement anymore.

Edit: I'll add that, in the comment I was replying to, it very much looked like you had to go to a settings page in order to opt-out, which I think is entirely reasonably described as having been opted-in by default. Here's what they had written:

> All you have to do is flip a single switch in the options to turn it off

And I actually think "opted-in by default" is valid and calls out cases where it looks like you consent, but that decision was made for you. Although in this case I think I've seen other comments that describe the UX differently, but my comment was more of a general comment than about this particular flow.

currymj

16 hours ago

i think skepticism is healthy, but they've handled this in a fairer way than any other online product i've used before.

they gave me a popup to agree to the ToS change, but I can ignore it for a month and still use the product. In the popup, they clearly explained the opt-out switch, which is available in the popup itself as well as in the settings.

some_random

17 hours ago

I don't think it's good, but people both here and on reddit are acting like this is some Great Betrayal when it's just a single switch that they prominently present to you. If they're going to make this change, this is exactly how I'd want them to do it.

latexr

17 hours ago

> If they're going to make this change

Feels like the complaint is precisely that people don’t want them to make this change.

> this is exactly how I'd want them to do it.

Seems naive to believe it will always be done like this, especially for new users.

some_random

17 hours ago

First off, I don't think going into the settings and flipping a toggle switch once is a huge burden on those who want to use a service privately. But more importantly, some of the comments here are so hysterical I have to assume that they read the title and jumped to the conclusion that you cannot opt out anymore without a business account.

robwwilliams

8 hours ago

Odd that you are being down-voted for pointing out the easy opt-out option. I need this opt-in feature. I suppose the polarity of action could be a factor.

boredatoms

15 hours ago

No, that's BS; lots of people won't know the default got flipped.

some_random

14 hours ago

There's a huge popup the first time you log in now

ljosifov

18 hours ago

Excellent. What were they waiting for up to now?? I thought they already trained on my data. I assume they train, even hope that they train, even when they say they don't. People that want to be data privacy maximalists - fine, don't use their data. But there are people out there (myself) that are on the opposite end of the spectrum, and we are mostly ignored by the companies. Companies just assume people only ever want to deny them their data.

It annoys me greatly, that I have no tick box on Google to tell them "go and adapt models I use on my Gmail, Photos, Maps etc." I don't want Google to ever be mistaken where I live - I have told them 100 times already.

This idea that "no one wants to share their data" is just assumed, and permeates everything. Like soft-ball interviews that a popular science communicator did with DeepMind folks working in medicine: every question was prefixed by litany of caveats that were all about 1) assumed aversion of people to sharing their data 2) horrors and disasters that are to befall us should we share the data. I have not suffered any horrors. I'm not aware of any major disasters. I'm aware of major advances in medicine in my lifetime. Ultimately the process does involve controlled data collection and experimentation. Looks a good deal to me tbh. I go out of my way to tick all the NHS boxes too, to "use my data as you see fit". It's an uphill struggle. The defaults are always "deny everything". Tick boxes never go away, there is no master checkbox "use any and all of my data and never ask me again" to tick.

koolba

17 hours ago

> It annoys me greatly, that I have no tick box on Google to tell them "go and adapt models I use on my Gmail, Photos, Maps etc." I don't want Google to ever be mistaken where I live - I have told them 100 times already.

As we’ve seen LLMs be able to fully regenerate text from their sources (or at least close enough), aren’t you the least bit worried about your personal correspondence magically appearing in the wild?

simonw

18 hours ago

If you have an API key for a paid service, would you be OK with someone asking ChatGPT or VS Code Copilot for an API key for that service and getting yours, which they then use to rack up bills that you have to pay?

JohnMakin

16 hours ago

The fact you are not aware of abuse, or abuse has not yet happened to you, does not mean it isn't a problem for you.

> The defaults are always "deny everything".

This is definitely not true for a massive amount of things, I'm unsure how you're even arriving at this conclusion.

ljosifov

15 hours ago

Maybe in the US. In the UK, I have found the obstacles to data sharing codified in UK law frustrating. I'm reasonably sure some people will have died because of this who would not have died otherwise - the "otherwise" case being if they could communicate with the NHS similarly (via email, WhatsApp) to how they communicate in their private and professional lives.

Within the UK NHS and UK private hospital care, these are my personal experiences.

1) Can't email my GP to pass information back-and-forth. GP withholds their email contact, I can't email them e.g. pictures of scans, or lab work reports. In theory they should have those already on their side. In practice they rarely do. The exchange of information goes sms->web link->web form->submit - for one single turn. There will be multiple turns. Most people just give up.

2) MRI scan private hospital made me jump 10 hops before sending me link, so I can download my MRI scans videos and pictures. Most people would have given up. There were several forks in the process where in retrospect could have delayed data DL even more.

3) Blood tests scheduling can't tell me back that scheduled blood test for a date failed. Apparently it's between too much to impossible for them to have my email address on record, and email me back that the test was scheduled, or the scheduling failed. And that I should re-run the process.

4) I would like to volunteer my data to benefit R&D in the NHS. I'm a user of medicinal services. I'm cognisant that all those are helping, but the process of establishing them relied on people unknown to me sharing very sensitive personal information. If it wasn't for those unknown to me people, I would be way worse off. I'd like to do the same, and be able to tell UK NHS "here are, my lab works reports, 100 GB of my DNA paid for by myself, my medical histories - take them all in, use them as you please."

In all cases, vague mutterings of "data protection... GDPR..." have been relayed back as "reasons". I take it that's mostly B/S. Yes, there are obstacles, but the staff could work around them if they wanted to. However, there is a kernel of truth: it's easier for them not to try to share - less work and less risk - so the laws are used as a fig leaf (in the worst case, an alibi for laziness).

SantalBlush

18 hours ago

If only you were just giving them your own data. In reality, you're giving them data about your friends, relatives, and coworkers without their consent. Let's stop pretending there is any way to opt out by simply not using these companies' services; it isn't true.

p3rls

17 hours ago

I think the real frustrating part is that they're using your data, scanning every driver's license etc that comes onto the google play store-- and there's still scammers etc using official google products that people catch everyday on twitter now that scambaiting is becoming a popular pastime.

otikik

18 hours ago

“Claude, please write and commit this as if you were ljosifov. Yes, please use his GitHub token, thank you”

ardit33

18 hours ago

This is a problem for folks with sensitive data, and also for corporate users who don't want their data being used due to all kinds of liability issues.

I am sure they will have a corporate carve-out; otherwise it makes them unusable for some large corps.