mrob
7 days ago
Not open source. Even if we accept model weights as source code, which is highly dubious, this clearly violates clauses 5 and 6 of the Open Source Definition. It discriminates between users (clause 5) by refusing to grant any rights to users in the European Union, and it discriminates between uses (clause 6) by requiring agreement to an Acceptable Use Policy.
EDIT: The HN title, which previously made the claim, has since been changed. But as HN user swyx pointed out, Tencent is also claiming this is open source, e.g.: "The currently unveiled Hunyuan-Large (Hunyuan-MoE-A52B) model is the largest open-source Transformer-based MoE model in the industry".
ronsor
7 days ago
I will again ask the obligatory question: are model weights even copyrightable? And if not, does the "license" still matter?
parl_match
7 days ago
I doubt there will be a satisfactory answer for a long time.
killjoywashere
7 days ago
How's that NYTimes vs OpenAI lawsuit going? Last I can find is things are hung up in discovery: OpenAI has requested potentially a century of NYTimes reporters' notes.
https://news.bloomberglaw.com/ip-law/openais-aggressive-cour...
bdowling
7 days ago
Half a century's worth of reporters' notes might be some valuable training data.
neilv
7 days ago
> The AI company asked Judge Sidney H. Stein of the US District Court for the Southern District of New York to step in and compel the Times to produce reporters’ notes, interview memos, and other materials for each of the roughly 10 million contested articles the publication alleges were illegally plugged into the company’s AI models. OpenAI said it needs the material to suss out the copyrightability of the articles. The Times quickly fired back, calling the request absurd.
Can any lawyer on here defend OpenAI's request? Or is the article not characterizing it well in the quote?
warkdarrior
7 days ago
(IANAL)
Model weights could be treated the same way phone books, encyclopedias, and other collections of data are treated. The copyright is over the collection itself, even if the individual items are not copyrightable.
TMWNN
7 days ago
>phone books, encyclopedias, and other collections of data are treated
Encyclopedias are copyrightable. Phone books are not.
skissane
7 days ago
> Encyclopedias are copyrightable. Phone books are not.
It depends on the jurisdiction. The US Supreme Court ruled that phone books are not copyrightable in the 1991 case Feist Publications, Inc. v. Rural Telephone Service Co. However, that is not the law in the UK, which generally follows the 1900 House of Lords decision Walter v Lane that found that mere "sweat of the brow" is enough to establish copyright – that case upheld a publisher's copyright on a book of speeches by politicians, purely on the grounds of the human effort involved in transcribing them.
Furthermore, under its 1996 Database Directive, the EU introduced the sui generis database right, which is a legally distinct form of intellectual property from copyright, but with many of the same features, protecting mere aggregations of information, including phone directories. The UK has retained this after Brexit. However, EU directives give member states discretion over the precise legal mechanism of their implementation, and the UK used that discretion to make database rights a subset of copyright – so, while in EU law they are a technically distinct type of IP from copyright, under UK law they are an application of copyright. EU law only requires database rights to have a term of 15 years.
Do not be surprised if in the next couple of years the EU comes out with an "AI Model Weights Directive" establishing a "sui generis AI model weights right". And I'm sure the US Congress will be interested in following suit. I expect OpenAI / Meta / Google / Microsoft / etc. will be lobbying for them to do so.
ronsor
7 days ago
Encyclopedias may be collections of facts, but the writing is generally creative. Phone books are literally just facts. AI models are literally just facts.
margalabargala
7 days ago
> AI models are literally just facts.
Are they, or are they collections of probabilities? If they are probabilities, and those probabilities change from model to model, that seems like they might be copyrightable.
If Google, OpenAI, Facebook, and Anthropic each train a model from scratch on an identical training corpus, they would wind up with four different models that had four differing sets of weights, because they digest and process the same input corpus differently.
That indicates to me that they are not a collection of facts.
ronsor
7 days ago
The AI training algorithms are deterministic given the same dataset, same model architecture, and same set of hyperparameters. The main reasons the models would not be identical are differing random seeds and precision issues. The differences would not be due to any creative decisions.
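To illustrate (a toy sketch in PyTorch, CPU only; the tiny dataset and hyperparameters are made up for the example): fix the seed and the whole run reproduces bit-for-bit; change only the seed and you get different weights, with no creative decision anywhere in the loop.

    import torch

    def train(seed):
        torch.manual_seed(seed)                    # the only source of variation here
        model = torch.nn.Linear(4, 1)              # toy "architecture"
        opt = torch.optim.SGD(model.parameters(), lr=0.1)
        x = torch.ones(8, 4)                       # fixed toy "dataset"
        y = torch.zeros(8, 1)
        for _ in range(100):
            opt.zero_grad()
            loss = ((model(x) - y) ** 2).mean()
            loss.backward()
            opt.step()
        return model.weight.detach().clone()

    print(torch.equal(train(0), train(0)))  # True: same seed -> identical weights
    print(torch.equal(train(0), train(1)))  # False: different seed -> different weights

(On GPUs, non-deterministic kernels and floating-point reduction order are where the "precision issues" come in.)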
margalabargala
7 days ago
Sure, but they don't all use the same algorithm, the same hyperparameters, etc.
At some point, with sufficiently many hyperparameters being chosen, that starts becoming a creative decision. If 5 parameters are available and all are left at the default, then no, that's not creative. If there are ten thousand, and all are individually tweaked to yield what the user wants, is that creative?
Not to mention all of these companies write their own algorithms to do the training which can introduce other small differences.
roywiggins
7 days ago
What if I train an AI model on exactly one copyrighted work and all it does is spit that work back out?
e.g. if I upload Marvels_Avengers.mkv.onnx and it reliably reproduces the original (after all, it's just a fact that the first byte of the original file is 0xF0, etc.)
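Something like this degenerate case, as a sketch (plain Python; the filename and byte value are just my hypothetical): the "weights" are literally the bytes of the work, and "inference" plays them back.

    class MemorizedWork:
        # A degenerate "model" whose only parameters are the bytes of one file.
        def __init__(self, path):
            with open(path, "rb") as f:
                self.weights = f.read()   # the "learned" parameters

        def generate(self):
            return self.weights           # reproduces the original verbatim

    # model = MemorizedWork("Marvels_Avengers.mkv")  # hypothetical input file
    # model.generate()[0]                            # the "fact" about byte 0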
bdowling
7 days ago
A work that is “substantially similar” to a copyrighted work infringes that work, under US law, no matter how it was produced. (Note: Some exceptions apply and you have to read a lot of cases to get an idea of what courts find “substantially similar”.)
HWR_14
7 days ago
> no matter how it was produced
IIRC, this is wrong. Independent creation is a valid (but almost impossible to prove) defense in US copyright law.
This example is not an independent creation, but your reasoning seems wrong.
bdowling
5 days ago
I wrote "some exceptions apply" to try to avoid getting into the weeds, but yes, independent creation is an exception. Other exceptions include out-of-term works, public domain, Mise-en-scène (e.g., stock characters), fair use (a huge can of worms), etc.
ronsor
7 days ago
If the sole purpose of your model is to copy a work, then that's copyright infringement.
PeterStuer
6 days ago
If the sole purpose of your model is to copy a work, then there would be far easier, cheaper and more reliable techniques to achieve that.
Judge the output, not the system.
roywiggins
7 days ago
Oh, in this case, the model can either reproduce the work exactly, or it can play tic-tac-toe depending on how you prompt it.
ronsor
7 days ago
We can change "sole purpose" to "primary purpose", and I'd argue something that happens 50% of the time counts as a primary purpose.
PittleyDunkin
7 days ago
Who gives a damn about copyright when this is clearly profiting off of someone else's work without compensation? Sometimes the law is inadequate and that's ok—the law just needs to change.
dplavery92
7 days ago
Both the title of Tencent's paper [0] and their homepage for the model [1] use the term "Open-Source", so I think they are making the claim.
[0] https://arxiv.org/pdf/2411.02265 [1] https://llm.hunyuan.tencent.com/
vanguardanon
7 days ago
What is the reason for restrictions in the EU? Is it due to some EU regulations?
ronsor
7 days ago
Most likely yes. I don't think companies can be blamed for not wanting to subject themselves to EU regulations or uncertainty.
Edit: Also, if you don't want to follow or deal with EU law, you don't do business in the EU. People here regularly say if you do business in a country, you have to follow its laws. The opposite also applies.
troupo
7 days ago
[flagged]
ronsor
7 days ago
I will address both points:
1. No one is training on users' bank details, but if you're training on the whole Internet, it's hard to be sure if you've filtered out all PII, or even who is in there.
2. This isn't happening because no one has time for more time-wasting lawsuits.
troupo
7 days ago
> No one is training on users' bank details, but if you're training on the whole Internet
Tencent has access to more than just bank accounts.
In the West, there's Meta, which this year opted everyone on its platform into training its AI.
> This isn't happening because no one has time for more time-wasting lawsuits.
No, this isn't happening because a) their models are, without fail, trained on material they shouldn't have willy-nilly access to, and b) they want to pretend to be open source without being open source
bilbo0s
7 days ago
??
Doesn't that mean that if they used data created by (or even the data of) anyone in the EU, they would want to not release that model in the EU?
This sounds like "if an EU citizen created, or has data referenced, in any piece of the data you trained from then..."
Which, I mean, I can kind of see why US and Chinese companies prefer to just not release their models in the EU. How could a company ever make a guarantee satisfying those requirements? It would take a massive filtering effort.
em500
7 days ago
This seems to mirror the situation where US financial regulations (FATCA) are seen by foreign financial institutions as such a hassle to deal with that they'd prefer to just not accept US citizens as customers.
troupo
7 days ago
> that they would want to not release that model in the EU
They don't release that model in the EU, that's correct
> This sounds like "if an EU citizen created, or has data referenced, in any piece of the data you trained from then..."
Yes, and that should be the default for any citizen of any country in the world.
Instead you have companies like Meta just opting everyone in to their AI training dataset.
> I can kind of see why US and Chinese companies prefer to just not release their models in the EU.
Companies having unfettered, unrestricted access to any and all data they want is not as good a thing as you make it out to be
warkdarrior
7 days ago
> > This sounds like "if an EU citizen created, or has data referenced, in any piece of the data you trained from then..."
> Yes, and that should be the default for any citizen of any country in the world.
This is a completely untenable policy. Each and every piece of data in the world can be traced to one or more citizens of some country. Actively getting permission for every item is not feasible for any company, no matter the scale of the company.
andyferris
7 days ago
I think that’s kinda the point that is being made.
Technology-wise, it is clearly feasible to aggregate the data to train an LLM and to release a product based on it.
It seems that some would argue that was never legally a feasible thing to do, based on the training data being impossible to use legally. So, it is the existence of many of these LLMs that is (legally) untenable.
Whether valid or not, the point may be moot because, like Uber, if the laws actually do forbid this use, they will change as necessary to accommodate the new technology. Too many “average voters” like using things such as ChatGPT, and it's not a hill politicians will be willing to die on.
troupo
7 days ago
> Actively getting permission for every item is not feasible for any company, no matter the scale of the company.
There's a huge amount of data that:
- isn't personal data
- isn't copyrighted
- isn't otherwise protected
You could argue about whether that is enough data, but neither you nor the corporations argue that. You just go for "every single scrap of data on the planet must be made accessible to supranational trillion-dollar corporations, without limits, now and forever"
user
7 days ago
blueblimp
7 days ago
In Meta's case, the problem is that they had been given the go-ahead by the EU to train on certain data, and then after starting training, the EU changed its mind and told them to stop.
GaggiX
7 days ago
They probably trained on data protected by privacy laws, similar to Meta.
karaterobot
7 days ago
Hmm, in fairness I don't see where Tencent is claiming this is open source (at least in this repo; I haven't checked elsewhere). The title of the HN post does make the claim, and that may be controversial or simply incorrect.
swyx
7 days ago
readme: https://github.com/Tencent/Tencent-Hunyuan-Large
> "By open-sourcing the Hunyuan-Large model"
karaterobot
7 days ago
Yeah, I was incorrect above. I just didn't search for the hyphen.
kaliqt
7 days ago
I agree; however, Meta is guilty of this crime as well.
PittleyDunkin
7 days ago
[flagged]
foooorsyth
7 days ago
[flagged]
mrob
7 days ago
The term "open source" had no significant use to refer to software before the Open Source Initiative started promoting it. Previously, it was only intelligence industry jargon, meaning "publicly available information", which includes software that fails your "can read the source code" test. "Source" was used in the journalistic sense, not as in "source code". The correct term for software that passes your test but does not meet the Open Source Definition is "source available".
kube-system
7 days ago
The OSI made a huge mistake in choosing to use a non-trademarkable borrowed term as their own trade industry term. The original (and quite long-standing) use to refer to publicly available texts is still widely used, and English isn't a prescriptive language outside of legal frameworks like trademark. This is why you really should pick a trademarkable name when you try to define a trade term.
HDThoreaun
7 days ago
Open source means the source code is openly available. That is it. Phrases that have intuitive meaning need to stop being co-opted.
mrob
7 days ago
If that meaning is "intuitive", why was it not used before the Open Source Initiative introduced their definition? The competing uses are the ones co-opting an existing phrase.
foooorsyth
7 days ago
It’s perfectly intuitive to anyone with a brain. Never heard of OSI but they seem just about as pedantic, neurotic, and annoying with language as FSF.
Open source = I can view the source code. That's what it means, that's what it has always meant, and that's what it will always mean. Simple as.
DataDaemon
7 days ago
Who cares about the EU? They are destroying themselves.
Mistletoe
7 days ago
Ironically their policies are why I want to move there with my American dollars. I want to live somewhere that cares about my rights, not the rights of corporations.
CamperBob2
7 days ago
That's fine, but don't complain when you lose access to products and services that are widely available elsewhere.
In particular, restrictions on ML models will leave you without access to extremely powerful resources that are available to people in other countries, and to people in your own country who don't mind operating outside the law. Copyright maximalism is not, in fact, a good thing, and neither is overbearing nanny-statism. Both will ultimately disempower you.
bluefirebrand
7 days ago
You have to realize that as an individual, you have no power anyways
It doesn't matter if an individual personally has access to ML models, because government and/or huge corporations will ensure that individuals cannot use them for anything that would threaten government or corporate interests
This unfettered explosion of ML growth is disempowering all of us. Those with power are not using these tools to augment us, they are hoping to replace us.
CamperBob2
7 days ago
> This unfettered explosion of ML growth is disempowering all of us.
Never mind that I've gotten things done with ChatGPT that would otherwise have taken much longer, or not gotten done at all. If this is what "disempowerment" feels like, bring it on.
Although the tech is nowhere near ready to make it happen, I would be very happy to be "replaced" by AI. I have better things to do than a robot's job. You probably do, too.
Mistletoe
7 days ago
Can you name some of these extremely powerful resources? I’m fine without access to AI hallucinations and poorly made images with six fingers.
CamperBob2
7 days ago
(Shrug) Among other capabilities, the ability to turn English into working code is a big deal. Perhaps you disagree, but if you do, it signals the presence of a gulf too large to cross in an HN thread.
Say what you want about ML models, they will get better at a rate that outpaces any possible self-improvement on your part. (Maybe you've noticed that those jokes about six-fingered people aren't aging particularly well.) The same is true for me, and I don't want to be left behind as that happens. At the national scale, countries that act to restrict or impede progress in this area will be outcompeted dramatically in the long run.
user
7 days ago
the5avage
7 days ago
Where would you go if you were to live there (as a programmer interested in AI)? Just asking for a friend.
user
7 days ago