ankit219
10 hours ago
The difference in implementation comes down to business goals more than anything.
There is a clear directionality for ChatGPT. At some point they will monetize by ads and affiliate links. Their memory implementation is aimed at creating a user profile.
Claude's memory implementation feels more oriented towards the long-term goal of accessing abstractions of past interactions. It's very close to how humans access memories, albeit with a search feature. They have not implemented this yet afaik, but there is a clear path where they leverage their current implementation with RL post-training such that Claude "remembers" the mistakes you pointed out last time. In future iterations it could derive abstractions from a given conversation (eg: "the user asked me to make xyz changes on this task last time, so maybe the agent can proactively do it now" or "this was the process the agent followed last time").
At the most basic level, ChatGPT wants to remember you as a person, while Claude cares about how your previous interactions were.
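The "abstractions of past interactions" idea above can be sketched in a few lines. This is purely a toy illustration, not Anthropic's actual implementation: each finished conversation is distilled into a short note, and later queries search those notes (here with naive keyword overlap; a real system would presumably use embeddings and an LLM summarization pass).

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    notes: list = field(default_factory=list)  # (conversation_id, abstraction)

    def remember(self, conversation_id: str, abstraction: str):
        # In a real system the abstraction would come from an LLM summary pass.
        self.notes.append((conversation_id, abstraction))

    def recall(self, query: str) -> list:
        # Toy keyword-overlap search; production systems would use embeddings.
        terms = set(query.lower().split())
        return [a for _, a in self.notes
                if terms & set(a.lower().split())]

store = MemoryStore()
store.remember("conv-1", "user asked me to add type hints to new Python code")
store.remember("conv-2", "user prefers tabs over spaces")
print(store.recall("how should I format Python code"))
# -> ['user asked me to add type hints to new Python code']
```

The point of the sketch: the store holds distilled lessons rather than a demographic profile, which is the contrast being drawn with ChatGPT.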
Workaccount2
10 hours ago
Don't fool yourself into thinking Anthropic won't be serving up personalized ads too.
GuB-42
6 hours ago
Anthropic seems to want to make you buy a subscription, not show you ads.
ChatGPT seems to be more popular with those who don't want to pay, and they are therefore more likely to rely on ads.
forgotoldacc
30 minutes ago
In the 2020s, subscriptions don't preclude showing ads. Companies will milk money in as many ways as they can.
cantor_S_drug
3 hours ago
They might be coming from different directions, but these things, as they often do, will converge. Too big of a market to leave untapped.
chii
2 hours ago
and Netflix used to think they didn't want to show ads either.
serf
an hour ago
As a former paying user, it felt more like they were buying my subscription with a decent product so that they could pitch their business prospects to investors by claiming a high subscription count.
I have never encountered such bad customer service anywhere -- and at 200 bucks a month at that.
mrheosuper
3 hours ago
So ChatGPT will become a "salesman". And I do not trust any salesman.
ankit219
9 hours ago
My conjecture is that their memory implementation is not aimed at building a user profile. I don't know if they would or would not serve ads in the future, but it's hard to see how the current implementation helps them in that regard.
cj
8 hours ago
> I don't know if they would or would not serve ads in the future
There are 2 possible futures:
1) You are served ads based on your interactions
2) You pay a subscription fee equal to the amount they would have otherwise earned on ads
I highly doubt #2 will happen. (See: Facebook, Google, twitter, et al)
Let’s not fool ourselves. We will be monetized.
And model quality will be degraded to maximize profits when competition in the LLM space dies down.
It’s not a pretty future. I wouldn’t be surprised if right now is the peak of model quality. Peak competition: everyone is trying to be the best. That won’t continue forever. Eventually everyone will pivot their priority towards monetization rather than model quality/training.
Hopefully I’m wrong.
fluidcruft
7 hours ago
But aren't we only worth something like $300/year each to Meta in terms of ads? I remember someone arguing that when the TikTok ban was being passed into law... essentially the argument was that TikTok was "dumping" engagement at far below market value (at something like $60/year) to damage American companies. That was the argument I remember, anyway.
majormajor
6 hours ago
Here is some old analysis I remember seeing at the time of Hulu ads vs no-ads plans: https://ampereanalysis.com/insight/hulus-price-drop-is-a-wis...
They dropped the price $2/mo on their with-ads plan to make a bigger gap between the no-ads plan and the ads plan, and the analyst here looks at their reported ad revenue and user numbers to estimate $12/mo per user from ads.
Whether Meta across all their properties does more than $144/yr in ads is an open question; long-form video ads are sold at a premium but Facebook/IG users see a LOT of ads across a lot of Meta platforms. The biggest advantage in ad-$-per-user Hulu has is that it's US-only. ChatGPT would also likely be considered premium ad inventory, though they'd have a delicate dance there around keeping that inventory high-value, and selling enough ads to make it worthwhile, without pissing users off too much.
Here they estimate a much lower number for ad revenue per Meta user, around $45 a year - https://www.statista.com/statistics/234056/facebooks-average... - but that's probably driven disproportionately by wealthy users in the US and similar countries compared to the long tail of global users.
One problem for LLM companies compared to media companies is that the marginal cost of offering the product to additional users is quite a bit higher. So business models, ads-or-subscription, will be interesting to watch from a global POV there.
One wonders what the monetization plan for the "writing code with an LLM using OSS libraries and not interested in paying for enterprise licenses and such" crowd will be. What sort of ads can you pull off in those conversations?
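The per-user figures quoted above are easy to sanity-check (numbers taken directly from the comment; the gap between the two estimates is roughly 3x):

```python
# Analyst estimate for Hulu's with-ads tier: ~$12/mo per user from ads.
hulu_ads_per_user_monthly = 12
hulu_ads_per_user_yearly = hulu_ads_per_user_monthly * 12
print(hulu_ads_per_user_yearly)  # 144

# Statista's global average ad revenue per Meta user: ~$45/yr.
meta_arpu_yearly = 45
print(hulu_ads_per_user_yearly / meta_arpu_yearly)  # ~3.2x gap
```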
cj
7 hours ago
If that’s the case, we have an even bigger problem on our hands. How will these companies ever be profitable?
If we’re already paying $20/mo and they’re operating at a loss, what’s the next move (assuming we’re only worth an extra $300/yr with ads)?
The math doesn’t add up, unless we stop training new models and degrade the ones currently in production, or there's some compute breakthrough that makes hardware + operating costs orders of magnitude cheaper.
rrrrrrrrrrrryan
6 hours ago
OpenAI has already started degrading their $20/month tier by automatically routing most of the requests to the lightest free-tier models.
We're very clearly heading toward a future where there will be a heavily ad-supported free tier, a cheaper (~$20/month) consumer tier with no ads or very few ads, and a business tier ($200-$1000/month) that can actually access state of the art models.
Like Spotify, the free tier will operate at a loss and act as a marketing funnel to the consumer tier, the consumer tier will operate at a narrow profit, and the business tier for the best models will have wide profit margins.
lodovic
an hour ago
I find that hard to believe. As long as we have open weight models, people will have an alternative to these subscriptions. For $200 a month it is cheaper to buy a GPU with lots of memory or rent a private H200. No ads and no spying. At this point the subscriptions are mainly about the agent functionality and not so much the knowledge in the models themselves.
willcannings
an hour ago
Most? Almost all my requests to the "Auto" model end up being routed to a "thinking" model, even those I think ChatGPT would be able to answer fine without extra reasoning time. Never say never, but right now the router doesn't seem to be optimising for cost (at least for me), it really does seem to be selecting a model based on the question itself.
furyofantares
3 hours ago
> If we’re already paying $20/mo and they’re operating at a loss
I'm quite confident they're not operating at a loss on those subscriptions.
fluidcruft
5 hours ago
Well, to make things worse, I was pretty convinced those were faked numbers to push the TikTok ban forward. I really doubt Meta and Google are each taking in that much per user. But my point is more that even if it were that high, ChatGPT isn't going to capture all the engagement. And even then, I don't know whether $300 is much after subtracting operating overhead. I'm just saying I have trouble believing there's gold to be had at the end of this LLM ad rainbow. People just seem to throw out ideas like "ads!" as if it's a surefire winning lottery ticket or something.
taneq
2 hours ago
3) You pay a subscription fee, and are force-fed ads anyway.
hbarka
2 hours ago
Imagine a model where a user can earn “token allowances” through some kind of personal contribution or value add.
dotancohen
9 hours ago
Though in general I like the idea of personal ads for products (NOT political ads), I've never seen an implementation that I felt comfortable with. I wonder if Anthropic might be able to nail that. I'd love to see products that I'm specifically interested in, so long as the advertisement itself is not altered to fit my preferences.
Terr_
8 hours ago
> Though in general I like the idea of personal ads for products (NOT political ads), I've never seen an implementation that I felt comfortable with.
No implementation will work for very long when the incentives behind it are misaligned.
The most important part of the architecture is that the user controls it for the user's best interests.
lostdog
9 hours ago
There is no such thing as a good flow for showing sponsored items in an LLM workflow.
The point of using an LLM is to find the thing that matches your preferences the best. As soon as the amount of money the LLM company makes plays into what's shown, the LLM is no longer aligned with the user, and no longer a good tool.
agar
2 hours ago
Same can be said for search. And your statement is provably correct, depending on the definition of "good tool."
But it's not only money's influence on the company, it's also money's influence on the /data/ underlying the platform that undermines the tool.
Once financial incentives are in place, what will be the AI equivalent of review bombing, SEO, linkjacking, google bombing, and similar bad behaviors that undermine the quality of the source data?
zer00eyz
9 hours ago
Claude: "What is my purpose?"
Anthropic: "You serve ads."
Claude: "Oh, my god."
Jest aside, every paper on alignment wrapped in the blanket of safety is also a move toward the goal of alignment to products. How much does a brand pay to make sure it gets placement in, say, GPT-6? How does anyone even price that sort of thing (because in theory it's there forever, or until 7 comes out)? It makes for some interesting business questions and even more interesting sales pitches.
rubidium
5 hours ago
I’ll be concerned when ex-Yelp “growth strategists” start showing up at OpenAI and leverage the same extortionist tactics.
rblatz
2 hours ago
The models aren’t static, we have to build validation sets to measure model drift and modify our prompts to compensate.
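That kind of drift validation can be sketched minimally: pin a small set of prompts with behavioral checks, re-run them against the live model, and flag when the pass rate drops below a baseline. Everything here is invented for illustration; `call_model` is a stub standing in for whatever API client you actually use.

```python
# Hypothetical validation set: each case pairs a prompt with a pass/fail check.
validation_set = [
    {"prompt": "Return only the word OK", "check": lambda out: out.strip() == "OK"},
    {"prompt": "What is 2+2? Answer with a digit.", "check": lambda out: "4" in out},
]

def measure_drift(call_model, baseline_pass_rate: float = 1.0, tolerance: float = 0.1):
    # Run every case through the model and compare the pass rate to baseline.
    passed = sum(1 for case in validation_set
                 if case["check"](call_model(case["prompt"])))
    rate = passed / len(validation_set)
    drifted = rate < baseline_pass_rate - tolerance
    return rate, drifted

# Stub model for demonstration; swap in a real API call in practice.
rate, drifted = measure_drift(lambda p: "OK" if "OK" in p else "4")
print(rate, drifted)  # 1.0 False
```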
Yoric
8 hours ago
Could be part of a LoRA or some other kind of plug-in refinement.
resters
4 hours ago
Suppose a user uses an LLM for topics a, b, and c quite often, and d, e, and f less often. Suppose b, c, and f are topics where OpenAI could run interruption ads (full-screen commercials, 30 seconds or longer) and most users would sit through them and wait for the response.
All that is needed to do that is to analyze topics.
Now suppose that OpenAI can analyze 1000 chats and coding sessions, and its algorithm determines that it can maximize revenue by leading the user to get a job at a specific company and then buy a car from another company. It could "accomplish" this via interruption ads, or by modifying the quality or content of its responses to increase the chances of those outcomes happening.
While both of these are in some way plausible and dystopian, all it takes is DeepSeek running without ads and suddenly the bar for how good closed-source LLMs have to be to hold market share is astronomically higher.
In my view, LLMs will be like any good or service: users will pay for quality, but different users will demand different levels of quality.
Advertising would seemingly undermine the credibility of the AI's answers, so I think full-screen interruption ads are the most likely outcome.
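The topic analysis described above can be sketched as follows. All topic names, usage counts, and per-interruption ad values are made up for illustration; the point is only that deciding where to interrupt is a trivial computation once topics are tracked.

```python
# Per-topic usage counts (queries) and hypothetical advertiser value per interruption.
usage = {"a": 50, "b": 40, "c": 35, "d": 10, "e": 8, "f": 5}
ad_value = {"b": 0.30, "c": 0.25, "f": 0.10}  # USD; topics a, d, e are unsellable

def expected_revenue(topic: str) -> float:
    return usage.get(topic, 0) * ad_value.get(topic, 0.0)

# Show interruption ads only where expected revenue clears a threshold.
THRESHOLD = 5.0
ad_topics = sorted(
    (t for t in usage if expected_revenue(t) >= THRESHOLD),
    key=expected_revenue, reverse=True)
print(ad_topics)  # ['b', 'c'] -- f is used too rarely to be worth interrupting
```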
spongebobstoes
9 hours ago
why do you see a "clear directionality" leading to ads? this is not obvious to me. chatgpt is not social media, they do not have to monetize in the same way
they are making plenty of money from subscriptions, not to count enterprise, business and API
rrrrrrrrrrrryan
6 hours ago
Altman has said numerous times that none of the subscriptions make money currently, and that they've been internally exploring ads in the form of product recommendations for a while now.
simianwords
an hour ago
Source? First time I’ve heard of it.
biophysboy
8 hours ago
Presumably they would offer both models (ads & subscriptions) to reach as many users as possible, provided that both models are net profitable. I could see free versions having limits on queries per day, Tinder style.
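A per-day query cap like that is simple to sketch. This is a toy in-memory version; a real service would use persistent, per-user storage and likely a sliding window rather than a calendar day.

```python
from datetime import date

class DailyQuota:
    def __init__(self, limit: int):
        self.limit = limit
        self.counts = {}  # (user_id, day) -> queries used

    def allow(self, user_id: str) -> bool:
        key = (user_id, date.today())
        used = self.counts.get(key, 0)
        if used >= self.limit:
            return False  # out of free queries until tomorrow
        self.counts[key] = used + 1
        return True

quota = DailyQuota(limit=3)
print([quota.allow("u1") for _ in range(4)])  # [True, True, True, False]
```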
0xCMP
9 hours ago
One has a more obvious route to building a profile directly off that already collected data.
And while they are making lots of revenue, even they have admitted in recent interviews that ChatGPT on its own is still not (yet) breakeven. With the kind of money invested in AI companies in general, introducing highly targeted ads is an obvious way to monetize the service further.
simianwords
an hour ago
This is an incorrect understanding of unit economics. They are not breaking even only because of reinvestment into R&D.
ankit219
8 hours ago
The router introduced in GPT-5 is probably the biggest signal. A router, while determining which model to route a query to, can also estimate how much $$ that query is worth (query here meaning the whole conversation). That helps decide how much compute OpenAI should spend on it. High-value queries -> more chances of affiliate links + in-context ads.
Then, the way the memory profile is stored clearly mirrors personalization. Ads work best when they are personalized, as opposed to contextual or generic (Google ads are personalized based on your profile and context). And then there's the change in branding from intelligent agent to companion app (and the hiring of Fidji Simo). There are more signals; I've just given a very high-level overview, but people have written detailed blogs on it. I personally think the affiliate links they could earn from align the incentives for everyone. They are a kind of ad, and that's the direction they are marching towards.
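This conjecture is purely hypothetical (and an OpenAI employee denies it in a reply below), but as a sketch, a router that scores queries for monetization value alongside compute needs could look like this. Every keyword list, model name, and rule here is invented for illustration:

```python
def route(query: str) -> str:
    # Hypothetical signals: commercial intent vs. need for heavy reasoning.
    commercial_keywords = {"buy", "best", "price", "hotel", "flight", "laptop"}
    words = set(query.lower().split())
    commercial_score = len(words & commercial_keywords)
    needs_reasoning = any(w in words for w in ("prove", "debug", "derive"))

    if commercial_score > 0:
        return "large-model+affiliate-links"   # high-value query gets more compute
    if needs_reasoning:
        return "thinking-model"
    return "light-model"                       # cheap default

print(route("best laptop under $1000"))  # large-model+affiliate-links
print(route("prove this identity"))      # thinking-model
print(route("tell me a joke"))           # light-model
```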
tedsanders
5 hours ago
I work at OpenAI and I'm happy to deny this hypothesis.
Our goal for the router (whether you think we achieved it or not) was purely to make the experience smoother and spare people from having to manually select thinking models for tasks that benefit from extra thinking. Without the router, lots of people just defaulted to 4o and never bothered using o3. With the router, people are getting to use the more powerful thinking models more often. The router isn't perfect by any means - we're always trying to improve things - but any paid user who doesn't like it can still manually select the model they want. Our goal was always a smoother experience, not ad injection or cost optimization.
ankit219
3 hours ago
Hi! Thank you for the clarification. I was just saying it might be possible in the future (in the same way that today you can determine how much compute - i.e., which model - a specific query needs). And the experience has definitely improved with the router, so kudos on that. I don't know what the final form factor of ads would be (I imagine it turning out to be a win-win-win scenario rather than showing ads at the expense of quality; this is a Google-level opportunity to invent something new), just that from the outside it seems you are preparing to monetize by ads, given the large userbase you have and virtually no competition at ChatGPT's usage level.
dweinus
8 hours ago
> they are making plenty of money from subscriptions, not to count enterprise, business and API
...except that they aren't? They are not in the black, and all that investor money comes with strings.