worstspotgain
13 hours ago
Excellent move by Newsom. We have a very active legislature, but it's been extremely bandwagon-y in recent years. I support much of Wiener's agenda, particularly his housing policy, but this bill was way off the mark.
It was basically a torpedo against open models. Market leaders like OpenAI and Anthropic weren't really worried about it, or about open models in general. Its supporters were the also-rans like Musk [1] trying to empty out the bottom of the pack, as well as those who are against any AI they cannot control, such as antagonists of the West and wary copyright holders.
[1] https://techcrunch.com/2024/08/26/elon-musk-unexpectedly-off...
dragonwriter
12 hours ago
> Excellent move by Newsom. [...] It was basically a torpedo against open models.
He vetoed it in part because the thresholds at which it applies at all are well beyond any current models, and he wants something that will impose greater restrictions on many smaller, lower-training-compute models that this bill would have left alone entirely.
> Market leaders like OpenAI and Anthropic weren't really worried about it, or about open models in general.
OpenAI (along with Google and Meta) led the institutional opposition to the bill, Anthropic was a major advocate for it.
worstspotgain
12 hours ago
> He vetoed it in part because the thresholds at which it applies at all are well beyond any current models, and he wants something that will impose greater restrictions on many smaller, lower-training-compute models that this bill would have left alone entirely.
Well, we'll see what passes again and when. By then there'll be more kittens out of the bag too.
> Anthropic was a major advocate for it.
I don't know about being a major advocate, the last I read was "cautious support" [1]. Perhaps Anthropic sees Llama as a bigger competitor of theirs than I do, but it could also just be PR.
[1] https://thejournal.com/articles/2024/08/26/anthropic-offers-...
FeepingCreature
2 hours ago
> I don't know about being a major advocate, the last I read was "cautious support" [1]. Perhaps Anthropic sees Llama as a bigger competitor of theirs than I do, but it could also just be PR.
This seems a curious dichotomy. Can we at least consider the possibility that they mean the words they say or is that off the table?
arduanika
7 hours ago
He's a politician, and his stated reason for the veto is not necessarily his real reason for the veto.
jodleif
6 hours ago
Makes perfect sense, since he's elected based on public positions.
raverbashing
2 hours ago
Anthropic was championing a lot of FUD in the AI area
SonOfLilit
13 hours ago
why would Google, Microsoft and OpenAI oppose a torpedo against open models? Aren't they positioned to benefit the most?
benreesman
13 hours ago
Some laws are just bad. When the API-mediated/closed-weights companies agree with the open-weight/operator-aligned community that a law is bad, it’s probably got to be pretty awful. That said, though my mind might be playing tricks on me, I seem to recall the big labs being in favor at one time.
There are a number of related threads linked, but I’ll personally highlight Jeremy Howard’s open letter as IMHO the best-argued case against SB 1047.
stego-tech
7 hours ago
> When the API-mediated/closed-weights companies agree with the open-weight/operator-aligned community that a law is bad, it’s probably got to be pretty awful.
I’d be careful with that cognitive bias, because obviously companies dumping poison into water sources are going to be opposed to laws that would prohibit them from dumping poison into water sources.
Always consider the broader narrative in addition to the specific narratives of the players involved. Personally, I'm on the side of the fence that's grumpy Newsom vetoed it. The veto stymies the larger discussion about regulating AI in general (not just LLMs) via the classic trap of "any law that isn't absolutely perfect and doesn't address all known and unknown problems is automatically bad," a trap often used to kill desperately needed reforms or regulations, regardless of industry. Instead of being able to build on the momentum of passed legislation and improve on it elsewhere, we now have to deal with the giant cudgel from the industry and its supporters of "even CA vetoed it, so why are you still fighting for it?"
SonOfLilit
13 hours ago
> The definition of “covered model” within the bill is extremely broad, potentially encompassing a wide range of open-source models that pose minimal risk.
What wide range of >$100mm open source models is he thinking of? And who are the impacted small businesses that would be scared to train one (at a cost of >$100mm) without paying for legal counsel?
shiroiushi
10 hours ago
It's too bad companies big and small didn't come together and successfully oppose the passage of the DMCA.
worstspotgain
9 hours ago
There were a lot of questionable Federal laws that made it through in the 90s, such as DOMA [1], PRWORA [2], IIRIRA [3], and perhaps the most maddening to me, DSHEA [4].
[1] https://en.wikipedia.org/wiki/Defense_of_Marriage_Act
[2] https://en.wikipedia.org/wiki/Personal_Responsibility_and_Wo...
[3] https://en.wikipedia.org/wiki/Illegal_Immigration_Reform_and...
[4] https://en.wikipedia.org/wiki/Dietary_Supplement_Health_and_...
shiroiushi
7 hours ago
"Questionable" is a very charitable term to use here, especially for the DSHEA which basically just legalizes snake-oil scams.
fshbbdssbbgdd
10 hours ago
My understanding is that tech was politically weaker back then. Although there were some big tech companies, they didn’t have as much of a lobbying operation.
wrs
9 hours ago
As I remember it, among other reasons, tech companies really wanted “multimedia” (at the time, that meant DVDs) to migrate to PCs (this was called the “living room PC”) and studios weren’t about to allow that without legal protection.
RockRobotRock
4 hours ago
No snark, but what's wrong with the DMCA? From what I understand, it took the idea that it's infeasible for a site to assume full liability for user-generated copyright infringement (so it granted sites safe harbor), while making them liable if they ignore takedown notices.
shiroiushi
3 hours ago
The biggest problem with it, AFAICT, is that it allows anyone who claims to hold a copyright to maliciously take down material they don't like by filing a DMCA notice. Companies receiving these notices have to follow a process to reinstate material that was falsely claimed, so many times they don't bother. There's no mechanism to punish companies that abuse this.
worstspotgain
3 hours ago
Among other things, quoth the EFF:
"Thanks to fair use, you have a legal right to use copyrighted material without permission or payment. But thanks to Section 1201, you do not have the right to break any digital locks that might prevent you from engaging in that fair use. And this, in turn, has had a host of unintended consequences, such as impeding the right to repair."
https://www.eff.org/deeplinks/2020/07/what-really-does-and-d...
RockRobotRock
3 hours ago
forgot about the anti-circumvention clause ;(((
that's the worst
CSMastermind
13 hours ago
The bill included language that required the creators of models to have various "safety" features that would severely restrict their development. It required audits and other regulatory hurdles to build the models at all.
llamaimperative
13 hours ago
If you spent $100MM+ on training.
gdiamos
13 hours ago
Advanced technology will drop the cost of training.
The flop targets in that bill would be like saying “640KB of memory is all we will ever need” and outlawing anything more.
Imagine what other countries would have done to us if we allowed a monopoly like that on memory in 1980.
llamaimperative
13 hours ago
No, there are two thresholds and BOTH must be met.
One of those is $100MM in training costs.
The other is measured in FLOPs, but it is already above the compute used to train GPT-4, so the "think of the small guys!" argument doesn't make much sense.
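To make the conjunction concrete, here's a sketch of the covered-model test (the 1e26-operation and $100M figures are from the bill text; the sample training runs below are hypothetical):

```python
# Sketch of SB 1047's "covered model" test: BOTH thresholds must be met.
# The 1e26-operation and $100M figures come from the bill; the example
# runs are made up for illustration.

def is_covered(training_flops: float, training_cost_usd: float) -> bool:
    FLOP_THRESHOLD = 1e26          # integer or floating-point operations
    COST_THRESHOLD = 100_000_000   # training cost in dollars
    return (training_flops > FLOP_THRESHOLD
            and training_cost_usd > COST_THRESHOLD)

# A startup training a model for $5M on 1e24 FLOPs trips neither test:
assert not is_covered(1e24, 5e6)
# Only a frontier-scale run trips both:
assert is_covered(3e26, 2.5e8)
```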
gdiamos
11 hours ago
Cost as a perf metric is meaningless and the history of computer benchmarks has repeatedly proven this point.
There is a reason why we report time (speedup) in SPEC instead of $$.
The price you pay depends on who you are and who is giving it to you.
llamaimperative
11 hours ago
That’s why there are two thresholds.
Vetch
10 hours ago
Cost per FLOP continues to drop on an exponential trend (and what bit flops do we mean?). Leaving aside more effective training methodologies and how that muddies everything by allowing superior to GPT4 perf using less training flops, it also means one of the thresholds soon will not make sense.
With the other threshold, it creates a disincentive for models like llama-405B+, in effect enshrining an even wider gap between open and closed.
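Back-of-envelope on that cost trend (the halving rate here is an assumption for illustration, not a measured figure):

```python
# If cost per FLOP halves roughly every 2.5 years (an assumed rate),
# the same training run that costs $100M today slides under the bill's
# dollar threshold within a decade.

def future_cost(cost_today: float, years: float,
                halving_years: float = 2.5) -> float:
    # Each halving period divides the cost of a fixed-FLOP run by two.
    return cost_today / 2 ** (years / halving_years)

cost_in_10y = future_cost(100e6, 10)  # four halvings later
assert cost_in_10y == 100e6 / 16      # -> $6.25M for the same FLOPs
```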
pas
an hour ago
Why? Llama is not generated by some guy in a shed.
And even if it were, if said guy has that much compute, then it's time to use some of it to describe the model's safety profile.
If it makes sense for Meta to release models, it would have made sense even with the requirement. (After all the whole point of the proposed regulation is to get some better sense of those closed models.)
gdiamos
11 hours ago
Tell that to me when we get to llama 15
llamaimperative
11 hours ago
What?
gdiamos
11 hours ago
“But the big guys are struggling getting past 100KB, so ‘think of the small guys’ doesn’t make sense when the limit is 640KB.”
How do people on a computer technology forum ignore the 10,000x improvement in computers over 30 years due to advances in computer technology?
I could understand why politicians don’t get it.
I should think that computer systems companies would be up in arms over SB 1047 in the same way they would be if the government was thinking of putting a cap on hard drives bigger than 1 TB.
It puts a cap on flops. Isn’t the biggest company in the world in the business of selling flops?
llamaimperative
10 hours ago
It would be crazy if the bill had a built-in mechanism to regularly reassess both the cost and FLOP thresholds… which it does.
Inversely to your sarcastic “understanding” about politicians’ stupidity, I can’t understand how tech people seem incapable or unwilling to actually read the legislation they have such strong opinions about.
gdiamos
10 hours ago
Who are you llamaimperative, and what is your motivation to support SB 1047?
lucubratory
7 hours ago
It's troubling that you are saying things about the bill which are false, and then speculating on the motives of someone just pointing out that what you are saying is false.
gdiamos
5 hours ago
why not tell us?
Or point out what is actually false?
You rebutted the point that the flop limit will not actually limit anyone by saying that GPT-4 is out of reach of startups.
Is OpenAI a startup? Is Anthropic? Is Grok? Is Perplexity? Is SSI?
You ignored the counter points that advanced technology exponentially raises flop limits and changes costs.
You said that the flop limit can be raised over time. So startups shouldn’t worry.
You ignored the counter point that flop limits in export controls are explicitly designed to limit competition from other nations.
Flop limits not being a real limit is a ridiculous argument. The intent of a flop limit is to limit, no matter how you sugar coat it.
lucubratory
4 hours ago
You have confused me with someone else.
gdiamos
10 hours ago
If your goal is to lift the limit, why put it in?
We periodically raise flop limits in export control law. The intention is still to limit China and Iran.
Would any computer industry accept a government mandated limit on perf?
Should NVIDIA accept a limit on flops?
Should Pure accept a limit on TBs?
Should Samsung accept a limit on HBM bandwidth?
Should Arista accept a limit on link bandwidth?
I don’t think that there is enough awareness that scaling laws tie intelligence to these HW metrics. Enforcing a cap on intelligence is the same thing as a cap on these metrics.
https://en.m.wikipedia.org/wiki/Neural_scaling_law
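As a rough sketch of what I mean by scaling laws tying capability to compute (my own illustration, using the published Chinchilla-style fit from Hoffmann et al.; the coefficients are ballpark, not exact for any current model):

```python
# Chinchilla-style scaling law: predicted loss L(N, D) falls as a power
# law in parameter count N and training tokens D. Constants are the
# fitted values reported by Hoffmann et al. (2022); illustrative only.

def scaling_loss(n_params: float, n_tokens: float) -> float:
    E, A, B = 1.69, 406.4, 410.7   # irreducible loss + fitted scale terms
    alpha, beta = 0.34, 0.28       # fitted exponents
    return E + A / n_params**alpha + B / n_tokens**beta

# Loss improves monotonically as you scale either axis, which is why a
# cap on FLOPs acts as a cap on capability:
small = scaling_loss(1e9, 2e10)     # ~1B params, ~20B tokens
large = scaling_loss(7e10, 1.4e12)  # ~70B params, ~1.4T tokens
assert large < small
```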
Has this legislation really thought through the implications of capping technology metrics, especially in a state where most of the GDP is driven by these metrics?
Clearly I’m biased because I am working on advancing these metrics. I’m doing it because I believe in the power of computing technology to improve the world (smartphones, self driving, automating data entry, biotech, scientific discovery, space, security, defense, etc, etc) as it has done historically. I also believe in the spirit of inventors and entrepreneurs to contribute and be rewarded for these advancements.
I would like to understand the biases of the supporters of this bill beyond a power grab by early movers.
Export control flop limits are designed to limit US adversaries' access to technology.
I think it would be informative if the group of people trying to limit access of AI technology to themselves was brought into the light.
Who are they? Why do they think the people of the US and of CA should grant that power to them?
pj_mukh
9 hours ago
Or used a model someone open sourced after spending $100M+ on its training?
Like, if I'm a startup reliant on open-source models, I realize I don't bear the liability and extra safety precautions myself, but I never heard any guarantee that this wouldn't deter Meta from releasing its models to me if my business was in California.
I never heard any clarification from the pro groups about that.
wslh
11 hours ago
All of that means the barriers to entry for startups skyrocket.
SonOfLilit
11 hours ago
Startups that spend >$100mm on one training run...
wslh
11 hours ago
There are startups and startups, the ones that you read on media are just a fraction of the worldwide reality.
worstspotgain
13 hours ago
If there was just one quasi-monopoly it would have probably supported the bill. As it is, the market leaders have the competition from each other to worry about. Getting rid of open models wouldn't let them raise their prices much.
SonOfLilit
13 hours ago
So if it's not them, who is the hidden commercial interest sponsoring an attack on open source models that cost >$100mm to train? Or does Wiener just genuinely hate megabudget open source? Or is it an accidental attack, aimed at something else? At what?
worstspotgain
13 hours ago
Like I said, supporters included wary copyright holders and the bottom-market also-rans like Musk. If your model is barely holding up against Llama, what's the point of staying in?
SonOfLilit
13 hours ago
And two of the three godfathers of AI, and all of the AI notkilleveryoneism crowd.
Actually, wait, if Grok is losing to GPT, why would Musk care about Llama more than Altman? Llama hurts his competitor...
worstspotgain
12 hours ago
The market in my argument looks like OpenAI ~ Anthropic > Google >>> Meta (~ or maybe >) Musk/Alibaba. The top 3 aren't worried about the down-market stuff. You're free to disagree of course.
gdiamos
9 hours ago
Claude, SSI, Grok, GPT, Llama, …
Should we crown one the king?
Or perhaps it is better to let them compete?
Perhaps advanced AI capability will motivate advanced AI safety capability?
fat_cantor
3 hours ago
It's an interesting thought that as AI advances, and becomes more capable of human destruction, programmers, bots and politicians will work together to create safety for a large quantity of humans
fifilura
2 hours ago
"AI defense corps"
Maken
4 hours ago
What are the economic incentives for AI safety?
wrsh07
13 hours ago
I would note that Facebook and Google were opposed to e.g. GDPR, although it gave them a larger share of the pie.
When framed like that: why be opposed, it hurts your competition? The answer is something like: it shrinks the pie or reduces the growth rate, and that's bad (for them and others)
The economics of this bill aren't clear to me (how large of a fine would Google/Microsoft pay in expectation within the next ten years?), but they maybe also aren't clear to Google/Microsoft (and that alone could be a reason to oppose)
Many of the ai safety crowd were very supportive, and I would recommend reading Zvi's writing on it if you want their take
hn_throwaway_99
13 hours ago
Yeah, I think the argument that "this just hurts open models" makes no sense given the supporters/detractors of this bill.
The thing that large companies care about most in the legal realm is certainty. They're obviously going to be a big target of lawsuits regardless, so they want to know that legislation is clear as to the ways they can act - their biggest fear is a good "emotional sob story" landing in front of a court with a sympathetic jury. It sounded like this legislation was so vague that it would attract a horde of lawyers looking for ways to argue these big companies didn't take "reasonable" care.
SonOfLilit
13 hours ago
Sob stories are definitely not covered by the text of the bill. The "critical harm" clause (ctrl-f this comment section for a full quote) is all about nuclear weapons and massive hacks and explicitly excludes "just" someone dying or getting injured with very clear language.
Cupertino95014
13 hours ago
> We have a very active legislature, but it's been extremely bandwagon-y in recent years
"It's been a clown show."
There. Fixed it for you.
arduanika
12 hours ago
Come on, we're trying to have a productive discussion here. There's no need to just drop in and insult clowns.
labster
10 hours ago
To be fair, clowning around is a lot more tractable than homelessness, housing prices, health care, or immigration.
Cupertino95014
10 hours ago
Hear.
Just keep getting reelected, since no one expects you to accomplish anything. People in the rest of the country push "term limits" as the solution to everything. I always point out that we've had them in CA for 20 years. It just means that they run for a different office after they're termed out.
Or become lobbyists.
labster
9 hours ago
We should do the same thing in software engineering. After 4 years in web dev, you have to switch to something else like embedded systems or DBA. Or be forced to become a highly paid consultant.
dgellow
3 hours ago
> After 4 years in web dev, you have to switch to something else like embedded systems or DBA
Unironically, that would be awesome