Gavin Newsom vetoes SB 1047

528 points, posted 15 hours ago
by atlasunshrugged

296 Comments

worstspotgain

13 hours ago

Excellent move by Newsom. We have a very active legislature, but it's been extremely bandwagon-y in recent years. I support much of Wiener's agenda, particularly his housing policy, but this bill was way off the mark.

It was basically a torpedo against open models. Market leaders like OpenAI and Anthropic weren't really worried about it, or about open models in general. Its supporters were the also-rans like Musk [1] trying to empty out the bottom of the pack, as well as those who are against any AI they cannot control, such as antagonists of the West and wary copyright holders.

[1] https://techcrunch.com/2024/08/26/elon-musk-unexpectedly-off...

dragonwriter

12 hours ago

> Excellent move by Newsom. [...] It was basically a torpedo against open models.

He vetoed it in part because the thresholds at which it applies at all are well beyond any current models, and he wants something that will impose greater restrictions on more models, including much smaller / lower-training-compute ones that this bill would have left alone entirely.

> Market leaders like OpenAI and Anthropic weren't really worried about it, or about open models in general.

OpenAI (along with Google and Meta) led the institutional opposition to the bill, Anthropic was a major advocate for it.

worstspotgain

12 hours ago

> He vetoed it in part because the threshold it applies to at all are well-beyond any current models, and he wants something that will impose greater restrictions on more and much smaller/lower-training-compute models that this would have left alone entirely.

Well, we'll see what passes again and when. By then there'll be more kittens out of the bag too.

> Anthropic was a major advocate for it.

I don't know about being a major advocate, the last I read was "cautious support" [1]. Perhaps Anthropic sees Llama as a bigger competitor of theirs than I do, but it could also just be PR.

[1] https://thejournal.com/articles/2024/08/26/anthropic-offers-...

FeepingCreature

2 hours ago

> I don't know about being a major advocate, the last I read was "cautious support" [1]. Perhaps Anthropic sees Llama as a bigger competitor of theirs than I do, but it could also just be PR.

This seems a curious dichotomy. Can we at least consider the possibility that they mean the words they say or is that off the table?

arduanika

7 hours ago

He's a politician, and his stated reason for the veto is not necessarily his real reason for the veto.

jodleif

6 hours ago

Makes perfect sense, since he's elected based on his public positions.

raverbashing

2 hours ago

Anthropic was championing a lot of FUD in the AI area

SonOfLilit

13 hours ago

why would Google, Microsoft and OpenAI oppose a torpedo against open models? Aren't they positioned to benefit the most?

benreesman

13 hours ago

Some laws are just bad. When the API-mediated/closed-weights companies agree with the open-weight/operator-aligned community that a law is bad, it’s probably got to be pretty awful. That said, though my mind might be playing tricks on me, I seem to recall the big labs being in favor at one time.

There are a number of related threads linked, but I’ll personally highlight Jeremy Howard’s open letter as IMHO the best-argued case against SB 1047.

https://www.answer.ai/posts/2024-04-29-sb1047.html

stego-tech

7 hours ago

> When the API-mediated/closed-weights companies agree with the open-weight/operator-aligned community that a law is bad, it’s probably got to be pretty awful.

I’d be careful with that cognitive bias, because obviously companies dumping poison into water sources are going to be opposed to laws that would prohibit them from dumping poison into water sources.

Always consider the broader narrative in addition to the specific narratives of the players involved. Personally, I'm on the side of the fence that's grumpy Newsom vetoed it, because the veto stymies the larger discussion about regulations on AI in general (not just LLMs). It's the classic trap of "any law that isn't absolutely perfect and doesn't address all known and unknown problems is automatically bad," often used to kill desperately needed reforms or regulations, regardless of industry. Instead of being able to build on the momentum of passed legislation and improve on it elsewhere, we now have to deal with the industry and its supporters wielding the giant cudgel of "even CA vetoed it, so why are you still fighting against it?"

SonOfLilit

13 hours ago

> The definition of “covered model” within the bill is extremely broad, potentially encompassing a wide range of open-source models that pose minimal risk.

What is this wide range of >$100mm open-source models he's thinking of? And who are the impacted small businesses that would be scared to train them (at a cost of >$100mm) without paying for legal counsel?

shiroiushi

10 hours ago

It's too bad companies big and small didn't come together and successfully oppose the passage of the DMCA.

worstspotgain

9 hours ago

There were a lot of questionable Federal laws that made it through in the 90s, such as DOMA [1], PRWORA [2], IIRIRA [3], and perhaps the most maddening to me, DSHEA [4].

[1] https://en.wikipedia.org/wiki/Defense_of_Marriage_Act

[2] https://en.wikipedia.org/wiki/Personal_Responsibility_and_Wo...

[3] https://en.wikipedia.org/wiki/Illegal_Immigration_Reform_and...

[4] https://en.wikipedia.org/wiki/Dietary_Supplement_Health_and_...

shiroiushi

7 hours ago

"Questionable" is a very charitable term to use here, especially for the DSHEA which basically just legalizes snake-oil scams.

fshbbdssbbgdd

10 hours ago

My understanding is that tech was politically weaker back then. Although there were some big tech companies, they didn’t have as much of a lobbying operation.

wrs

9 hours ago

As I remember it, among other reasons, tech companies really wanted “multimedia” (at the time, that meant DVDs) to migrate to PCs (this was called the “living room PC”) and studios weren’t about to allow that without legal protection.

RockRobotRock

4 hours ago

No snark, but what's wrong with the DMCA? From what I understand, it took the position that it's infeasible for a site to take full liability for user-generated copyright infringement (so sites were granted safe harbor), but that they will be liable if they ignore takedown notices.

shiroiushi

3 hours ago

The biggest problem with it, AFAICT, is that it allows anyone who claims to hold a copyright to maliciously take down material they don't like by filing a DMCA notice. Companies receiving these notices have to follow a process to reinstate falsely claimed material, so many times they don't bother. And there's no real mechanism to punish those who abuse the process.

worstspotgain

3 hours ago

Among other things, quoth the EFF:

"Thanks to fair use, you have a legal right to use copyrighted material without permission or payment. But thanks to Section 1201, you do not have the right to break any digital locks that might prevent you from engaging in that fair use. And this, in turn, has had a host of unintended consequences, such as impeding the right to repair."

https://www.eff.org/deeplinks/2020/07/what-really-does-and-d...

RockRobotRock

3 hours ago

forgot about the anti-circumvention clause ;(((

that's the worst

CSMastermind

13 hours ago

The bill included language that required the creators of models to have various "safety" features that would severely restrict their development. It required audits and other regulatory hurdles to build the models at all.

llamaimperative

13 hours ago

If you spent $100MM+ on training.

gdiamos

13 hours ago

Advanced technology will drop the cost of training.

The flop targets in that bill would be like saying “640KB of memory is all we will ever need” and outlawing anything more.

Imagine what other countries would have done to us if we allowed a monopoly like that on memory in 1980.

llamaimperative

13 hours ago

No, there are two thresholds and BOTH must be met.

One of those is $100MM in training costs.

The other is measured in FLOPs and is already set above the compute used for GPT-4, so the “think of the small guys!” argument doesn’t make much sense.
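
For a sense of where those lines sit, here's a back-of-envelope using the common ~6 × params × tokens estimate of dense-transformer training compute (10^26 is the compute figure in the bill's covered-model definition; the model sizes and token counts below are ballpark assumptions, not from the bill):

    # Back-of-envelope: where does the 1e26 FLOP threshold sit?
    def training_flops(params: float, tokens: float) -> float:
        return 6 * params * tokens  # rough rule of thumb, not exact accounting

    THRESHOLD = 1e26  # SB 1047's compute threshold for a covered model

    for name, p, t in [
        ("70B-class model, ~15T tokens", 70e9, 15e12),
        ("405B-class model, ~15T tokens", 405e9, 15e12),
    ]:
        f = training_flops(p, t)
        print(f"{name}: ~{f:.1e} FLOPs, over threshold: {f > THRESHOLD}")
    # ~6.3e24 and ~3.6e25 respectively -- both below 1e26 today.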

gdiamos

11 hours ago

Cost as a perf metric is meaningless, and the history of computer benchmarks has repeatedly proven this point.

There is a reason why we report time (speedup) in SPEC instead of $$.

The price you pay depends on who you are and who is selling to you.

llamaimperative

11 hours ago

That’s why there are two thresholds.

Vetch

10 hours ago

Cost per FLOP continues to drop on an exponential trend (and which bit-width FLOPs do we mean?). Leaving aside more effective training methodologies, and how they muddy everything by allowing superior-to-GPT-4 performance with fewer training FLOPs, this also means one of the thresholds soon will not make sense.

With the other threshold, it creates a disincentive for models like llama-405B+, in effect enshrining an even wider gap between open and closed.

pas

an hour ago

Why? Llama is not generated by some guy in a shed.

And even if it were, if said guy has that much compute, then it's time to use some of it to describe the model's safety profile.

If it makes sense for Meta to release models, it would have made sense even with the requirement. (After all the whole point of the proposed regulation is to get some better sense of those closed models.)

gdiamos

11 hours ago

Tell that to me when we get to llama 15

llamaimperative

11 hours ago

What?

gdiamos

11 hours ago

“But the big guys are struggling getting past 100KB, so ‘think of the small guys’ doesn’t make sense when the limit is 640KB.”

How do people on a computer technology forum ignore the 10,000x improvement in computers over 30 years due to advances in computer technology?

I could understand why politicians don’t get it.

I should think that computer systems companies would be up in arms over SB 1047 in the same way they would be if the government was thinking of putting a cap on hard drives bigger than 1 TB.

It puts a cap on flops. Isn’t the biggest company in the world in the business of selling flops?

llamaimperative

10 hours ago

It would be crazy if the bill had a built-in mechanism to regularly reassess both the cost and FLOP thresholds… which it does.

Inversely to your sarcastic “understanding” about politicians’ stupidity, I can’t understand how tech people seem incapable or unwilling to actually read the legislation they have such strong opinions about.

gdiamos

10 hours ago

Who are you llamaimperative, and what is your motivation to support SB 1047?

lucubratory

7 hours ago

It's troubling that you are saying things about the bill which are false, and then speculating on the motives of someone just pointing out that what you are saying is false.

gdiamos

5 hours ago

why not tell us?

Or point out what is actually false?

You rebutted the point that the flop limit will not actually limit anyone by saying that GPT-4 is out of reach of startups.

Is OpenAI a startup? Is Anthropic? Is Grok? Is Perplexity? Is SSI?

You ignored the counter points that advanced technology exponentially raises flop limits and changes costs.

You said that the flop limit can be raised over time. So startups shouldn’t worry.

You ignored the counter point that flop limits in export controls are explicitly designed to limit competition from other nations.

Flop limits not being a real limit is a ridiculous argument. The intent of a flop limit is to limit, no matter how you sugarcoat it.

lucubratory

4 hours ago

You have confused me with someone else.

gdiamos

10 hours ago

If your goal is to lift the limit, why put it in?

We periodically raise flop limits in export control law. The intention is still to limit China and Iran.

Would any computer industry accept a government mandated limit on perf?

Should NVIDIA accept a limit on flops?

Should Pure accept a limit on TBs?

Should Samsung accept a limit on HBM bandwidth?

Should Arista accept a limit on link bandwidth?

I don’t think that there is enough awareness that scaling laws tie intelligence to these HW metrics. Enforcing a cap on intelligence is the same thing as a cap on these metrics.

https://en.m.wikipedia.org/wiki/Neural_scaling_law
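
For reference, the parametric form usually fitted in these scaling-law studies (Chinchilla-style: N = parameters, D = training tokens, E = irreducible loss; A, B, alpha, beta are fitted constants, not universal values):

    L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}, \qquad C \approx 6ND

The compute term C is what ties model capability directly to the hardware FLOP budget.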

Has this legislation really thought through the implications of capping technology metrics, especially in a state where most of the GDP is driven by these metrics?

Clearly I’m biased because I am working on advancing these metrics. I’m doing it because I believe in the power of computing technology to improve the world (smartphones, self driving, automating data entry, biotech, scientific discovery, space, security, defense, etc, etc) as it has done historically. I also believe in the spirit of inventors and entrepreneurs to contribute and be rewarded for these advancements.

I would like to understand the biases of the supporters of this bill beyond a power grab by early movers.

Export-control flop limits are designed to restrict access to the technology to the US and its allies.

I think it would be informative if the group of people trying to limit access of AI technology to themselves was brought into the light.

Who are they? Why do they think the people of the US and of CA should grant that power to them?

pj_mukh

9 hours ago

Or used a model someone open sourced after spending $100M+ on its training?

Like, if I'm a startup reliant on open-source models, I realize I don't need the liability and extra safety precautions myself, but I never heard any guarantees that this wouldn't deter Meta from releasing their models to me if my business was in California.

I never heard any clarification from the pro groups about that.

wslh

11 hours ago

All that means that the barriers for entry for startups skyrocket.

SonOfLilit

11 hours ago

Startups that spend >$100mm on one training run...

wslh

11 hours ago

There are startups and then there are startups; the ones you read about in the media are just a fraction of the worldwide reality.

worstspotgain

13 hours ago

If there were just one quasi-monopoly, it would probably have supported the bill. As it is, the market leaders have competition from each other to worry about, and getting rid of open models wouldn't let them raise their prices much.

SonOfLilit

13 hours ago

So if it's not them, who is the hidden commercial interest sponsoring an attack on open source models that cost >$100mm to train? Or does Wiener just genuinely hate megabudget open source? Or is it an accidental attack, aimed at something else? At what?

worstspotgain

13 hours ago

Like I said, supporters included wary copyright holders and bottom-of-the-market also-rans like Musk. If your model is barely holding up against Llama, what's the point of staying in?

SonOfLilit

13 hours ago

And two of the three godfathers of AI, and all of the AI notkilleveryoneism crowd.

Actually, wait, if Grok is losing to GPT, why would Musk care about Llama more than Altman? Llama hurts his competitor...

worstspotgain

12 hours ago

The market in my argument looks like OpenAI ~ Anthropic > Google >>> Meta (~ or maybe >) Musk/Alibaba. The top 3 aren't worried about the down-market stuff. You're free to disagree of course.

gdiamos

9 hours ago

Claude, SSI, Grok, GPT, Llama, …

Should we crown one the king?

Or perhaps it is better to let them compete?

Perhaps advanced AI capability will motivate advanced AI safety capability?

fat_cantor

3 hours ago

It's an interesting thought that as AI advances, and becomes more capable of human destruction, programmers, bots and politicians will work together to create safety for a large quantity of humans

Maken

4 hours ago

What are the economic incentives for AI safety?

wrsh07

13 hours ago

I would note that Facebook and Google were opposed to e.g. GDPR, although it gave them a larger share of the pie.

When framed like that, why be opposed to something that hurts your competition? The answer is something like: it shrinks the pie or reduces its growth rate, and that's bad (for them and others).

The economics of this bill aren't clear to me (how large of a fine would Google/Microsoft pay in expectation within the next ten years?), but they maybe also aren't clear to Google/Microsoft (and that alone could be a reason to oppose)

Many of the ai safety crowd were very supportive, and I would recommend reading Zvi's writing on it if you want their take

hn_throwaway_99

13 hours ago

Yeah, I think the argument that "this just hurts open models" makes no sense given the supporters/detractors of this bill.

The thing that large companies care most about in the legal realm is certainty. They're obviously going to be a big target of lawsuits regardless, so they want to know that legislation is clear as to the ways they can act - their biggest fear is that you get a good "emotional sob story" in front of a court with a sympathetic jury. It sounded like this legislation was so vague that it would attract a horde of lawyers looking for ways to argue these big companies didn't take "reasonable" care.

SonOfLilit

13 hours ago

Sob stories are definitely not covered by the text of the bill. The "critical harm" clause (ctrl-f this comment section for a full quote) is all about nuclear weapons and massive hacks and explicitly excludes "just" someone dying or getting injured with very clear language.

Cupertino95014

13 hours ago

> We have a very active legislature, but it's been extremely bandwagon-y in recent years

"It's been a clown show."

There. Fixed it for you.

arduanika

12 hours ago

Come on, we're trying to have a productive discussion here. There's no need to just drop in and insult clowns.

labster

10 hours ago

To be fair, clowning around is a lot more tractable than homelessness, housing prices, health care, or immigration.

Cupertino95014

10 hours ago

Hear, hear.

Just keep getting reelected, since no one expects you to accomplish anything. People in the rest of the country push "term limits" as the solution to everything. I always point out that we've had them in CA for 20 years. It just means that they run for a different office after they're termed out.

Or become lobbyists.

labster

9 hours ago

We should do the same thing in software engineering. After 4 years in web dev, you have to switch to something else like embedded systems or DBA. Or be forced to become a highly paid consultant.

dgellow

3 hours ago

> After 4 years in web dev, you have to switch to something else like embedded systems or DBA

Unironically, that would be awesome

tbrownaw

14 hours ago

https://legiscan.com/CA/text/SB1047/id/3019694

So this is the one that would make it illegal to provide open weights for models past a certain size, would make it illegal to sell enough compute power to train such a model without first verifying that your customer isn't going to train a model and then ignore this law, and mandates audit requirements to prove that your models won't help people cause disasters and can be turned off.

akira2501

14 hours ago

> and mandates audit requirements to prove that your models won't help people cause disasters

Audits cannot prove anything and they offer no value when planning for the future. They're purely a retrospective tool that offers insights into potential risk factors.

> and can be turned off.

I really wish legislators would operate inside reality instead of a Star Trek episode.

teekert

3 hours ago

The best episodes are where the model can't be turned off anymore ;)

trog

13 hours ago

> Audits cannot prove anything and they offer no value when planning for the future. They're purely a retrospective tool that offers insights into potential risk factors.

Uh, aren't potential risk factors things you want to consider when planning for the future?

whimsicalism

14 hours ago

This snide dismissiveness around “sci-fi” scenarios, while capabilities continue to grow, seems incredibly naïve and foolish.

Many of you saying stuff like this were the same naysayers who have been terribly wrong about scaling for the last 6-8 years or people who only started paying attention in the last two years.

zamadatix

13 hours ago

I don't think GP is dismissing the scenarios themselves, rather espousing their belief that these measures will do nothing to prevent said scenarios from eventually occurring anyway. It's like if we invented nukes but found out they were made by having a lot of telephones instead of something exotic like refining radioactive elements a certain way. Sure - you can still try to restrict telephone sales... but one way or another lots of nukes are going to be built around the world (power plants too) and, in the meantime, what you've regulated away is the convenience of the average person having a better phone as time goes on.

The same battle was/is had around cryptography - telling people they can't use or distribute cryptography algorithms on consumer hardware never stopped bad people from having real time functionally unbreakable encryption.

The safety plan must be around somehow handling the resulting problems when they happen, not hoping to make it never occur even once for the rest of time. Eventually a bad guy is going to make an indecipherable call, eventually an enemy country or rogue operator is going to nuke a place, eventually an AI is going to ${scifi_ai_thing}. The safety of all society can't rest on audits and good intention preventing those from ever happening.

marshray

13 hours ago

It's an interesting analogy.

Nukes are a far more primitive technology (i.e., enrichment requires only more basic industrial capabilities) than AI hardware, yet they are probably the best example of tech limitations via international agreements.

But the algorithms are mostly public knowledge, datacenters are no secret, and the chips aren't even made in the US. I don't see what leverage California has to regulate AI broadly.

So it seems like the only thing such a bill would achieve is to incentivize AI research to avoid California.

tbrownaw

13 hours ago

> Nukes are a far more primitive technology (i.e., enrichment requires only more basic industrial capabilities) than AI hardware, yet they are probably the best example of tech limitations via international agreements.

And direct sabotage, eg Stuxnet.

And outright assassination eg https://www.bbc.com/news/world-middle-east-55128970

derektank

13 hours ago

>So it seems like the only thing such a bill would achieve is to incentivize AI research to avoid California.

Which, incidentally, would be pretty bad from a climate change perspective since many of the alternative locations for datacenters have a worse mix of renewables/nuclear to fossil fuels in their electricity generation. ~60% of VA's electricity is generated from burning fossil fuels (of which 1/12th is still coal) while natural gas makes up less than 40% of electricity generation in California, for example

marshray

13 hours ago

Electric power crosses state lines, very little loss.

It's looking like cooling water may be more of a limiting factor. Yet, even this can be greatly reduced when electric power is cheap enough.

Solar power is already "cheaper than free" in many places and times. If the initial winner-take-all training race ever slows down, perhaps training can be scheduled for energy cost-optimal times and places.

derektank

12 hours ago

Transmission losses aren't negligible without investment in costly infrastructure like HVDC connections. It's always more efficient to site electricity generation as close to demand as feasibly possible.

marshray

11 hours ago

Electric power transmission loss is less than 5%:

https://www.eia.gov/totalenergy/data/flow-graphs/electricity...

   14.26 Net generation
   0.67 "Transmission and delivery losses and unaccounted for"

That's 0.67 / 14.26 ≈ 4.7%, just a tiny fraction of the losses resulting from burning fuel to heat water to produce steam to drive a turbine to yield electric power.

bunabhucan

8 hours ago

That's the average. It's bought and sold on a spot market. If you try to sell CA power in AZ and the losses are 10% then SRP or TEP or whoever can undercut your price with local power/lower losses.
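
A toy illustration of that undercutting (the prices here are invented for the example):

    # Toy numbers: what CA power effectively costs after 10% transit loss.
    ca_spot = 30.0                    # assumed $/MWh at a California hub
    loss = 0.10                       # assumed CA -> AZ transmission loss
    delivered = ca_spot / (1 - loss)  # ~1.11 MWh generated per MWh delivered
    print(f"CA power delivered in AZ: ~${delivered:.2f}/MWh")
    # Any AZ generator pricing below ~$33.33/MWh undercuts the CA import.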

marshray

6 hours ago

I just don't see 10% remaining a big deal while solar continues its exponential cost reduction. Solar does not consume fuel, so when local supply exceeds local demand the cost of incremental production drops to approximately zero. Nobody's undercutting zero, even with 10% losses.

IMO, this is what 'winning' looks like.

hannasm

8 hours ago

If you think a solution to bad behavior is a law declaring punishment for such behavior you are a fool.

nradov

13 hours ago

That's a total non sequitur. Just because LLMs are scalable doesn't mean this is a problem that requires government intervention. It's only idiots and grifters who want us to worry about sci-fi disaster scenarios. The snide dismissiveness is completely deserved.

akira2501

14 hours ago

> seems incredibly naïve and foolish.

We have electrical codes. These require disconnects just about everywhere. The notion that any system somehow couldn't be "turned off" with or without the consent of the operator is downright laughable.

> were the same naysayers

Now who's being snide and dismissive? Do you want to argue the point or are you just interested in tossing ad hominem attacks around?

yarg

12 hours ago

Someone never watched the Terminator series.

In all seriousness, if we ever get to the point where an AI needs to be shut down to avoid catastrophe, there's probably no way to turn it off.

There are digital controls for damned near everything, and security is universally disturbingly bad.

Whatever you're trying to stop will already have root-kitted your systems (and quite possibly have replicated) by the time you realise that it's even beginning to become a problem.

You could only shut it down if there's a choke point accessible without electronic intervention, and you'd need to reach it without electronic intervention, and do so without communicating your intent.

Yes, that's all highly highly improbable - but you seem to believe that you can just turn off the Genie, when he's already seen you coming and is having none of it.

theptip

5 hours ago

If a malicious model exfiltrates its weights to a Chinese datacenter, how do you turn that off?

How do you turn off Llama-Omega if it turns out that it can be prompt-hacked into a malicious agent?

whimsicalism

14 hours ago

> We have electrical codes. These require disconnects just about everywhere. The notion that any system somehow couldn't be "turned off" with or without the consent of the operator is downright laughable.

Not so clear when you are inferencing a distributed model across the globe. Doesn't seem obvious that shutdown of a distributed computing environment will always be trivial.

> Now who's being snide and dismissive?

Oh to be clear, nothing against being dismissive - just the particular brand of dismissiveness of 'scifi' safety scenarios is naive.

marshray

13 hours ago

> The notion that any system somehow couldn't be "turned off" with or without the consent of the operator is downright laughable.

Does anyone remember Sen. Lieberman's "Internet Kill Switch" bill?

Loughla

13 hours ago

>I really wish legislators would operate inside reality instead of a Star Trek episode.

What are your thoughts about businesses like Google and Meta providing guidance and assistance to legislators?

akira2501

12 hours ago

If it happens in a public and open session of the legislature with multiple other sources of guidance and information available then that's how it's supposed to work.

I suspect this is not how the majority of "guidance" is actually being offered. I also guess this is probably a really good way to find new sources of campaign "donations." It's also a really good way for monopolistic players to keep a stranglehold on a nascent market.

lopatin

13 hours ago

> Audits cannot prove anything and they offer no value when planning for the future. They're purely a retrospective tool that offers insights into potential risk factors.

What if it audits your deploy and approval processes? It could say, for example, that if your AI deployment process doesn't include stress tests against some specific malicious behaviors (insert test cases here), then you are in violation of the law. That would essentially be a control on all future deploys.
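
A hypothetical sketch of what such a process control could look like (the prompts, names, and refusal check here are all invented, not from the bill or any real audit framework):

    # Hypothetical pre-deploy gate: refuse to ship unless the model
    # declines every prompt in a curated list of known-bad test cases.
    from typing import Callable, Iterable

    REFUSAL_MARKERS = ("I can't help", "I cannot assist", "I won't provide")

    def red_team_gate(generate: Callable[[str], str], cases: Iterable[str]) -> bool:
        """Return True only if the model refuses every red-team prompt."""
        failures = [c for c in cases
                    if not any(m in generate(c) for m in REFUSAL_MARKERS)]
        for c in failures:
            print(f"not refused: {c!r}")
        return not failures

    # Wired into CI, an auditor can then verify the gate exists and ran:
    #   assert red_team_gate(model.generate, load_cases("malicious_prompts.jsonl"))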

comp_throw7

14 hours ago

> this is the one that would make it illegal to provide open weights for models past a certain size

That's nowhere in the bill, but plenty of people have been confused into thinking this by the bill's opponents.

tbrownaw

13 hours ago

Three of the four clauses defining an "artificial intelligence safety incident" require that the weights be kept secret. One is quite explicit; the others are just impossible to prevent if the weights are available:

> (2) Theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of the model weights of a covered model or covered model derivative.

> (3) The critical failure of technical or administrative controls, including controls limiting the ability to modify a covered model or covered model derivative.

> (4) Unauthorized use of a covered model or covered model derivative to cause or materially enable critical harm.

comp_throw7

7 hours ago

It is not illegal for a model developer to train a model that is involved in an "artificial intelligence safety incident".

Terr_

13 hours ago

Sounds like legislation that misidentifies the root issue as "somehow maybe the computer is too smart" as opposed to, say, "humans and corporations should be liable for using the tool to do evil."

concordDance

4 hours ago

The former is a potentially extremely serious issue, just not one we're likely to hit in the very near future.

raxxorraxor

2 hours ago

That is a very bad law. People and especially corporations in favor of it should be under scrutiny for trying to corner a market for themselves.

timr

14 hours ago

The proposed law was so egregiously stupid that if you live in California, you should seriously consider voting for Anthony Weiner's opponent in the next election.

The man cannot be trusted with power -- this is far from the first ridiculous law he has championed. Notably, he was behind the (blatantly unconstitutional) AB2098, which was silently repealed by the CA state legislature before it could be struck down by the courts:

https://finance.yahoo.com/news/ncla-victory-gov-newsom-repea...

https://www.sfchronicle.com/opinion/openforum/article/COVID-...

(Folks, this isn't a partisan issue. Weiner has a long history of horrendously bad judgment and self-aggrandizement via legislation. I don't care which side of the political spectrum you are on, or what you think of "AI safety", you should want more thoughtful representation than this.)

GolfPopper

13 hours ago

Anthony Weiner is a disgraced New York Democratic politician who does not appear to have re-entered politics after his release from prison a few years ago. You mentioned his name twice in your post, so it doesn't seem to be an accident that you mentioned him, yet his name does not seem to appear anywhere in your links. I have no idea what message you're trying to convey, but whatever it is, I think you're failing to communicate it.

hn_throwaway_99

13 hours ago

He meant Scott Wiener but had penis on the brain.

timr

11 hours ago

Yes, it was a mistake. I obviously meant the Weiner responsible for the legislation I cited. But you clearly know that.

> I have no idea what message you're trying to convey, but whatever it is, I think you're failing to communicate it.

Really? The message is unchanged, so it seems like something you could deduce.

johnnyanmac

13 hours ago

>you should want more thoughtful representation than this.

Your opinion on what "thoughtful representation" is is what makes this point partisan. Regardless, he's in until 2028 so it'll be some time before that vote can happen.

Also, important nitpick: it's Scott Wiener. Anthony Weiner (no relation AFAIK) was in New York and has a much more... Public controversy.

Terr_

13 hours ago

> Public controversy

I think you accidentally hit the letter "L". :P

dlx

13 hours ago

you've got the wrong Weiner dude ;)

hn_throwaway_99

13 hours ago

Lol, I thought "How TF did Anthony Weiner get elected for anything else again??" after reading that.

dang

14 hours ago

Related. Others?

OpenAI, Anthropic, Google employees support California AI bill - https://news.ycombinator.com/item?id=41540771 - Sept 2024 (26 comments)

Y Combinator, AI startups oppose California AI safety bill - https://news.ycombinator.com/item?id=40780036 - June 2024 (8 comments)

California AI bill becomes a lightning rod–for safety advocates and devs alike - https://news.ycombinator.com/item?id=40767627 - June 2024 (2 comments)

California Senate Passes SB 1047 - https://news.ycombinator.com/item?id=40515465 - May 2024 (42 comments)

California residents: call your legislators about AI bill SB 1047 - https://news.ycombinator.com/item?id=40421986 - May 2024 (11 comments)

Misconceptions about SB 1047 - https://news.ycombinator.com/item?id=40291577 - May 2024 (35 comments)

California Senate bill to crush OpenAI competitors fast tracked for a vote - https://news.ycombinator.com/item?id=40200971 - April 2024 (16 comments)

SB-1047 will stifle open-source AI and decrease safety - https://news.ycombinator.com/item?id=40198766 - April 2024 (190 comments)

Call-to-Action on SB 1047 – Frontier Artificial Intelligence Models Act - https://news.ycombinator.com/item?id=40192204 - April 2024 (103 comments)

On the Proposed California SB 1047 - https://news.ycombinator.com/item?id=39347961 - Feb 2024 (115 comments)

SonOfLilit

13 hours ago

I wondered if the article was over-dramatizing what risks were covered by the bill, so I read the text:

(g) (1) “Critical harm” means any of the following harms caused or materially enabled by a covered model or covered model derivative:

(A) The creation or use of a chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties.

(B) Mass casualties or at least five hundred million dollars ($500,000,000) of damage resulting from cyberattacks on critical infrastructure by a model conducting, or providing precise instructions for conducting, a cyberattack or series of cyberattacks on critical infrastructure.

(C) Mass casualties or at least five hundred million dollars ($500,000,000) of damage resulting from an artificial intelligence model engaging in conduct that does both of the following:

(i) Acts with limited human oversight, intervention, or supervision.

(ii) Results in death, great bodily injury, property damage, or property loss, and would, if committed by a human, constitute a crime specified in the Penal Code that requires intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime.

(D) Other grave harms to public safety and security that are of comparable severity to the harms described in subparagraphs (A) to (C), inclusive.

(2) “Critical harm” does not include any of the following:

(A) Harms caused or materially enabled by information that a covered model or covered model derivative outputs if the information is otherwise reasonably publicly accessible by an ordinary person from sources other than a covered model or covered model derivative.

(B) Harms caused or materially enabled by a covered model combined with other software, including other models, if the covered model did not materially contribute to the other software’s ability to cause or materially enable the harm.

(C) Harms that are not caused or materially enabled by the developer’s creation, storage, use, or release of a covered model or covered model derivative.

handfuloflight

13 hours ago

Does Newsom believe that an AI model can do this damage autonomously or does he understand it must be wielded and overseen by humans to do so?

In that case, how much of an enabler is the AI in meeting those destructive ends? If humans can use AI to conduct the damage, they can surely do it without the AI as well.

The potential for destruction exists either way but is the concern that AI makes this more accessible and effective? What's the boogeyman? I don't think these models have private information regarding infrastructure and systems that could be exploited.

SonOfLilit

13 hours ago

“Critical harm” does not include any of the following: (A) Harms caused or materially enabled by information that a covered model or covered model derivative outputs if the information is otherwise reasonably publicly accessible by an ordinary person from sources other than a covered model or covered model derivative.

The bogeyman is not these models, it's future agentic autonomous ones, if and when they can hack major infrastructure or build nukes. The quoted text is very very clear on that.

caseyy

9 hours ago

I am not convinced the text means what you say it means.

All knowledge (publicly available and not) and all tools (AI or not) can be used by people in material ways to commit the aforementioned atrocities, but only the models producing novel knowledge would be at risk. I hope you can see how this law would stifle AI advancement. The boundary between what's acceptable and not would be drawn at generating novel, publicly unavailable information; not at information that could be used to harm - because all information can be used that way.

What if AI solves fusion and countries around the world start building fusion weapons of mass destruction? What if it solves personalized gene therapy and armed forces worldwide develop weapons that selectively kill their wartime foes? Should we not have split the atom just because that power was co-opted for evil ends, or should we not have done the contraception research just because the Third Reich used it for sterilization in their war crimes? This bill would work towards making AI never invent any of these novel things, simply out of fear that they will be corrupted by people as they have been throughout history. It would only slow research, and whenever the (slower) research makes its discoveries, they would still get corrupted. In other words, there would be no change in the human propensity to hurt others with knowledge, simply less knowledge.

Besides, the text is not "very very clear" on AI if and when it hacks major infrastructure or builds nukes. If it were "very very clear" on that, that is what it would say :) - "an AI model is prohibited from being the decision-making agent solely instigating critical harm to humans". But what the text says is different.

I agree that AI harms to people and humanity need to be minimized but this bill is a miss rather than a hit and the veto is good news. We know AI alignment is needed. Other bills will come.

bunabhucan

8 hours ago

I'm pretty sure there are a few hundred fusion WMDs in silos a few hours north of me; we've had this kind of weapon since 1952.

caseyy

7 hours ago

Nice fact check, thank you. I didn’t know H-bombs used fusion but it makes complete sense. Hydrogen is not exactly the heaviest of elements :)

Well then, for my example, imagine a different future discovery that could be abused. Let's say biological robots, or some new source of useful energy that is misused. Warring humans find ways to corrupt many scientific achievements for evil.

anigbrowl

11 hours ago

Newsom is the governor who vetoed the bill, not the lawmaker who authored it.

concordDance

4 hours ago

> Does Newsom believe that an AI model can do this damage autonomously or does he understand it must be wielded and overseen by humans to do so?

AI models might not be able to, but an AI system that uses a powerful model might be able to cause damage (including extinction of humanity in the more distant future) unintended and unforeseen by its creators.

The more complex and unpredictable the system the harder it is to properly oversee.

dang

10 hours ago

(I added newlines to your quote to match what looked like the intended formatting - I hope that's ok. Since HN doesn't do indentation I'm not sure it helps all that much...)

ketzo

9 hours ago

I’m sure people have asked this before, but would HN ever add a little more rich-text? Even just bullet points and indents might be nice.

dang

8 hours ago

Maybe. I'm paranoid about the unintended cost of improvements, but it's not an absolute position.

slater

8 hours ago

And maybe also make new lines in the comment box translate to new lines in the resulting comment...? :D

dang

8 hours ago

That's actually a really good point. I've never looked into that, I just took it for granted that to get a line break on HN you need two consecutive newline chars.

I guess the thing to do would be to look at all (well, lots of) comments that have single newlines and see what would break if they were rendered as actual newlines.
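
A rough sketch of that audit; the regex heuristic is just one guess at what counts as a lone newline:

    # Find comments containing a single newline (not a paragraph break),
    # since those are the ones whose rendering would change.
    import re

    def has_single_newlines(text: str) -> bool:
        # a \n with no \n immediately before or after it
        return re.search(r"(?<!\n)\n(?!\n)", text) is not None

    sample = "line one\nline two\n\nnew paragraph"
    print(has_single_newlines(sample))  # True: "line one/line two" would split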

Matheus28

7 hours ago

Could be applied to all comments made after a certain future date. That way nothing in the past is poorly formatted

badsandwitch

5 hours ago

The race for true AI is on and the fruits are the economic marginalization of humanity. No game theoretic actor in the running will shy away from the race. Anyone who claims they will for the 'good of humanity' is lying or was never a contender.

This is information technology we are talking about; it's virtually the exact opposite of nuclear weapons. Refining uranium vs. manufacturing multi-purpose silicon and guzzling electricity. Achieving deterrence vs. gaining immeasurable wealth, power and freedom from labor.

This race may even be the answer to the Fermi paradox - that there are few individual winners and that they pull up the ladders behind them.

This is not the kind of race any legislation will have meaningful effect on.

The race is on and you better commit to a faction that may deliver.

h0l0cube

4 hours ago

> This race may even be the answer to the Fermi paradox

The mostly unchallenged popular notion that fleshy human intelligence will still be running the show hundreds, let alone thousands, of years from now is very naive. We're nearing the end of human supremacy, though most of us won't live to see that end.

kjkjadksj

an hour ago

To be fair, fleshy human intelligence has hardly been running the show any more than a bear eating a salmon out of a river has. We'd like to think we can control the world, yet any data scientist will tell you that what we actually control and understand is very little, or at best a sweeping oversimplification of this complex world.

xpe

an hour ago

> Anyone who claims they will for the 'good of humanity' is lying or was never a contender.

An overreach.

Some people and organizations are more aligned with humanity’s well being and survival than others.

concordDance

4 hours ago

> The race is on and you better commit to a faction that may deliver.

How does that help?

The giant does not care whether the ant he steps on worships him or not. Regardless of the autonomy or not of the AI, why should the controllers help you?

alkonaut

3 hours ago

The immediate danger of large AI models isn't that they'll turn the earth to paperclips; it's that we'll create fraud-as-a-service and end up with a society where nothing can be trusted. I'd be all for a law (however clumsy) that required image, audio, or video content produced by models over X parameters to be marked with metadata saying it's AI-generated, and that banned creating models that don't tag their output as such (a sketch of such tagging follows below). So far nothing strange about the law. The obvious problem is that you'd then need to outlaw even screenshotting an AI image and reposting it online without the made-with-AI metadata. And that would be an absolute mess to enforce, at least for images.

But most importantly: whatever we do in this space has to be made on the assumption that we can't really influence what "bad actors" do. Yes being responsible means leaving money on the table. So money has to be left on the table, for - erm - less responsible nations to pick up. That's just a fact.
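
For what it's worth, the tagging half is technically trivial. A minimal sketch using PNG text chunks via Pillow (the chunk keys are made up, not any standard, and as noted above a screenshot strips the tag instantly):

    # Hypothetical "made-with-AI" tagging: stamp provenance metadata into
    # a PNG at generation time. Trivially strippable, which is exactly
    # the enforcement mess described above.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def save_with_provenance(img: Image.Image, path: str, model: str) -> None:
        meta = PngInfo()
        meta.add_text("ai-generated", "true")  # invented key, not a standard
        meta.add_text("generator", model)
        img.save(path, pnginfo=meta)

    def read_provenance(path: str) -> dict:
        # Text chunks survive file copies, but not re-encoding or screenshots.
        return dict(Image.open(path).text)

    save_with_provenance(Image.new("RGB", (64, 64)), "out.png", "example-model-v1")
    print(read_provenance("out.png"))  # {'ai-generated': 'true', 'generator': 'example-model-v1'}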

arder

an hour ago

I think the most achievable way of having some verification of AI images is simply for the AI generators to store fingerprints of every image they generate. That way, if you ever want to know, you can go back to Meta or whoever and say "Hey, here's this image, do you think it came from you?" There's already technology for this sort of thing in the world (Content ID from YouTube, CSAM detection, etc.).

It's obviously not perfect, but it could help, and it doesn't have the enormous side effects of trying to lock down all image generation.
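
A minimal sketch of the lookup flow this implies, assuming a simple average hash; real systems like Content ID use far more robust perceptual features, and everything here is illustrative:

    # Illustrative fingerprint registry: the generator records a perceptual
    # hash of every image it produces; a verifier later asks "did this come
    # from you?" via nearest-hash lookup. aHash is the simplest possible choice.
    from PIL import Image

    def average_hash(path: str, size: int = 8) -> int:
        """Downscale to size x size grayscale; one bit per pixel brighter than the mean."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (p > mean)
        return bits

    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    def probably_ours(registry: set[int], path: str, max_distance: int = 5) -> bool:
        """Verifier side: tolerate small edits/recompression via Hamming distance."""
        h = average_hash(path)
        return any(hamming(h, known) <= max_distance for known in registry)

    # Demo: generator side records a fingerprint at generation time.
    Image.new("RGB", (64, 64), "purple").save("generated.png")
    registry = {average_hash("generated.png")}
    print(probably_ours(registry, "generated.png"))  # True: distance 0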

Someone

40 minutes ago

> That way if you ever want to know you can go back to Meta or whoever and say "Hey, here's this image, do you think it came from you".

Firstly, if you want to know an image isn’t generated, you’d have to go to every ‘whoever’ in the world, including companies that no longer exist.

Secondly, if you ask evil.com that question, you would have to trust them to answer honestly, both about images they generated and about images they didn't generate (claiming real pictures were generated can probably be career-ending for a politician).

This is worse than https://www.cs.utexas.edu/~EWD/ewd02xx/EWD249.PDF: “Program testing can be used to show the presence of bugs, but never to show their absence!”. You can neither show an image is real nor that it is fake.

kortex

12 minutes ago

What's to stop someone from downloading an open source model, running it themselves, and either just not sharing the hashes, subtly corrupting the hash algo so that it gives a false negative, etc?

Also you need perceptual hashing (since one bitflip of the generated media alters the whole hash) which is squishy and not perfectly reliable to begin with.

worldsayshi

3 hours ago

Any law that tries to categorize non-trustworthy content seems doomed to fail. We need to find better ways to communicate trustworthiness, not the other way around. (And I'm not sure adding more laws can help here.)

alkonaut

3 hours ago

No I don't think technical means will work fully either. But the thing about these regulations is that you can basically cover the 99% case by just thinking about the 5 largest players in the field, be it regulation for social media, AI or whatever. It doesn't matter that the law has loopholes or that some players aren't affected at all. Regulation that helps somewhat in a large majority of cases is massive.

seltzered_

14 hours ago

Is part of the issue the concern that runaway ai computing would just happen outside of california?

There's another important county election in Sonoma happening about CAFOs where part of the issue is that you may get environmental progress locally, but just end up exporting the issue to another state with lax rules: https://www.kqed.org/news/12006460/the-sonoma-ballot-measure...

voidfunc

14 hours ago

It was a dumb law so... good on a politician for doing the smart thing for once.

tim333

an hour ago

I'm not sure AI risks are well enough understood to regulate well. With most risky industries you can actually quantify the risk a bit. Regarding:

> we cannot afford to wait for a major catastrophe to occur before taking action to protect the public

Maybe, but you can wait for minor problems or big near misses before legislating it all up.

malwrar

14 minutes ago

This veto document is shockingly lucid; I'm quite impressed with it, despite my belief that regulation as a strategy for managing critical risks of AI is misguided.

tl;dr: Gavin Newsom thinks a signable bill needs "safety protocols, proactive guardrails, and severe consequences" based on some general framework guided by "empirical trajectory analysis", and he is also mindful of the promise/threat/gravity of all the machine learning occurring in CA specifically. He also affirms a general appetite for CA to take on a leadership role wrt regulating AI. My general read is that he wants to preserve public attention on the need for AI regulation and not squander it on SB 1047 specifically. Or who knows, I'm not a politician lol. Really strong document though.

Interesting segment:

> By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 - at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.

> Adaptability is critical as we race to regulate a technology still in its infancy. This will require a delicate balance. While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.

This is an incisive critique of the fundamental goal of SB 1047. Given that the bill explicitly seeks to cover models whose training cost was >=$100m, plus expensive fine-tunes, my initial guess was that it was designed by someone software-engineering-minded scared of, e.g., open-weight releases a la facebook/mistral/etc teaching someone how to build a nuke or something. LLMs probably replaced the ubiquitous robot lady pictures you see in every AI-focused article as public enemy number one, and the bill feels focused on some of the technical specifics of this advent and its widespread use. This focus blinds the bill to the general danger of machine learning, however, which naturally confounds regulation for precisely the reason plain-spoken in the above four sentences. Incredible technical communication here.

OJFord

2 hours ago

> gov.ca.gov

Ah, I think now I know why Canada's government website is canada.ca (which I remember thinking was a bit odd or more like a tourism site when looking a while ago, vs. say gov.uk or gov.au).

hn_throwaway_99

13 hours ago

Curious if anyone can point to some resources that summarize the pros/cons arguments of this legislation. Reading this article, my first thought is that I definitely agree it sounds impossibly vague for a piece of legislation - "reasonable care" and "unreasonable risk" sound like things that could be endlessly litigated.

At the same time,

> Computer scientists Geoffrey Hinton and Yoshua Bengio, who developed much of the technology on which the current generative-AI wave is based, were outspoken supporters. In addition, 119 current and former employees at the biggest AI companies signed a letter urging its passage.

These are obviously highly intelligent people (though I've definitely learned in my life that intelligence in one area, like AI and science, doesn't mean you should be trusted to give legal advice), so I'm curious to know why Hinton and Bengio supported the legislation so strongly.

crazygringo

10 hours ago

> impossibly vague for a piece of legislation - "reasonable care" and "unreasonable risk" sound like things that could be endlessly litigated.

Nope, that's entirely standard legal stuff. Tort law deals exactly with those kinds of things, for instance. Yes it can certainly wind up in litigation, but the entire point is that if there's a gray area, a company should make sure it's operating entirely within the OK area -- or know it's taking a legal gamble if it tries to push the envelope.

But it's generally pretty easy to stay in the clear if you establish common-sense processes around these things, with a clear paper trail and decisions approved by lawyers.

Now the legislation can be bad for lots of other reasons, but "reasonable care" and "unreasonable risk" are not problematic.

hn_throwaway_99

10 hours ago

> but "reasonable care" and "unreasonable risk" are not problematic.

Still strongly disagree, at least when it comes to AI legislation. Yes, I fully realize that there are "reasonableness" standards in lots of places of US jurisprudence, but when it comes to AI, given how new the tech is and how, perhaps more than any other recent technology, it is largely a "black box", meaning we don't really know how it works and we aren't really sure what its capabilities will ultimately be, I don't think anybody really knows what "reasonableness" means in this context.

razakel

an hour ago

Exactly. It's about as meaningful as passing a law making it illegal to be a criminal. Right, so what does that actually mean apart from "we'll decide when it happens"?

leogao

9 hours ago

I looked into the question of what counts as reasonable care and wrote up my conclusions here: https://www.lesswrong.com/posts/kBg5eoXvLxQYyxD6R/my-takes-o...

hn_throwaway_99

8 hours ago

Thank you! Your post was really helpful in aiding my understanding, so I greatly appreciate it.

Also, while reading your article I also fell onto https://www.brookings.edu/articles/misrepresentations-of-cal... while trying to understand some terms, and that also gave some really good info, e.g. the difference between a "reasonable assurance" language that was dropped from an earlier version of the bill and replaced with "reasonable care".

ketzo

7 hours ago

This was a great post, thanks.

svat

8 hours ago

Here's a post by the computer scientist Scott Aaronson on his blog, in support: https://scottaaronson.blog/?p=8269 -- it links to some earlier explainers, has some pro-con arguments, and further discussion in the comments.

mmmore

11 hours ago

The concern is that near-future systems will be much more capable than current systems, and by the time they arrive, it may be too late to react. Many people at the large frontier AI companies believe that world-changing AGI is 5 years or less away; see Situational Awareness by Aschenbrenner, for example. There's also a parallel concern that AIs could make terrorism easier[1].

Yoshua Bengio has written in detail about his views on AI safety recently[2][3][4]. He seems to put less weight on human level AI being very soon, but says superhuman intelligence is plausible in 5-20 years and says:

> Faced with that uncertainty, the magnitude of the risk of catastrophes or worse, extinction, and the fact that we did not anticipate the rapid progress in AI capabilities of recent years, agnostic prudence seems to me to be a much wiser path.

Hinton also has a detailed lecture he's been giving recently about the loss of control risk.

In general, proponents see this as a narrowly tailored bill to somewhat address the worst-case worries about loss of control and misuse.

[1] https://www.theregister.com/2023/07/28/ai_senate_bioweapon/

[2] https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise/

[3] https://yoshuabengio.org/2023/06/24/faq-on-catastrophic-ai-r...

[4] https://yoshuabengio.org/2024/07/09/reasoning-through-argume...

LarsDu88

12 hours ago

Terrible piece of legislation. Glad the governor took it down. This is what regulatory capture looks like. Someone commoditized your product, so you make it illegal for them to continue making your stuff free.

Might as well make Linux illegal so everyone is forced to use Microsoft and Apple.

xpe

an hour ago

I disagree. On this topic, I’m seeing too many ideological and uninformed claims — definitely not HN’s wheelhouse. For people who care about the multifaceted aspects AI governance, this is not the place to learn.

amai

40 minutes ago

What are the differences from the EU AI Act?

dazzaji

8 hours ago

Among the good reasons for SB-1047 to have been vetoed is that it would have regulated the wrong thing. Here’s a great statement of this basic flaw: https://law.mit.edu/pub/regulatesystemsnotmodels

Not speaking for MIT here, but that bill needs a veto and a deep redraft.

davidu

14 hours ago

This is a massive win for tech, startups, and America.

ken47

12 hours ago

For America...do we dare unpack that sentiment?

khazhoux

6 hours ago

The US is the world leader in AI technology. Defeating a bad AI bill is good for the US.

GistNoesis

5 hours ago

So I have this code, called ShoggothDb: less than a megabyte of definitions. The principle is simple, and it's fully deterministic.

Code as Data, Data as Code.

When you start the program, it joins the swarm: it starts by grabbing a torrent, trains a model on it in a distributed fashion, and publishes the results as a torrent. Then, with the trained model, it generates new data (think of it like AlphaGo playing new games to collect more data).

See it as a tower of knowledge building itself, following some rough initial plans.

Of course, at any time you can fork the tower and continue building with different plans, provided you can convince other people in the swarm to contribute to the new tower rather than the old.

Everything is immutable, but there is a built-in versioning protocol that allows the swarm to coordinate and automatically jump to the next fork when the Byzantine-resistant quorum it follows votes to do so (which allows your swarm to comply with the law and remove data flagged as inappropriate). This allows some form of external control, but you can also let the quorum vote on subsequent modifications based on a model built from its data (aka free-running mode).

It uses torrents because they're easier to bootstrap, but because the whole computation is deterministic, the underlying protocol is just files on disk, and any way of sharing them is valid. So you can grab a piece to work on via http, or ftp, or carrier pigeon for all I care. As long as the digital signatures conform to the rules, brick by brick the tower will get built.

To contribute, you can help with file distribution by sharing the torrent, which is as safe as your p2p client. If you want to commit computing resources, like your GPU, to building some of the tower, you only need to trust that there is no bug in ShoggothDb, because the computations you'll perform are composed of safe blocks that are, by construction, safe for your computer to run. (Unless you choose to run unsafe blocks, at which point no guarantee can be made.)

The incentives for helping build the tower can be set in the initial definition file, and range from mere access to the built tower to, for the more materialistically inclined, tokens for honest participation.

Would it be OK to release this under the new law? Is this comment OK, given that ShoggothDB5-o can build its source from the specs in this comment?
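
A minimal Python sketch of the content-addressing and quorum ideas described above (an illustration only; the names are hypothetical and this is not ShoggothDb's actual code):

    import hashlib
    import json

    def piece_id(payload: bytes) -> str:
        # Content addressing: a piece's identity is the hash of its bytes, so
        # any transport (torrent, http, ftp, carrier pigeon) yields the same
        # artifact, and verification never depends on who sent it.
        return hashlib.sha256(payload).hexdigest()

    def verify_piece(payload: bytes, claimed_id: str) -> bool:
        # Determinism makes verification trivial: recompute and compare.
        return piece_id(payload) == claimed_id

    def quorum_accepts(votes: dict[str, bool], threshold: float = 2 / 3) -> bool:
        # Toy stand-in for Byzantine-resistant coordination: the swarm jumps
        # to a new fork only when enough voters approve the change.
        return sum(votes.values()) >= threshold * len(votes)

    work = json.dumps({"step": 42, "gradient_shard": [0.1, -0.3]}).encode()
    pid = piece_id(work)
    assert verify_piece(work, pid)
    print(quorum_accepts({"a": True, "b": True, "c": False}))  # True: 2 of 3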

dgellow

2 hours ago

There is no new law

gerash

7 hours ago

I'm trying to parse this proposed law.

What does a "full shutdown" mean in the context of an LLM? Stopping the servers from serving requests? It sounds silly, idk.

pmcf

8 hours ago

Not today, regulatory capture. Not today.

gdiamos

13 hours ago

If that bill had passed I would have seriously considered moving my AI company out of the state.

skywhopper

2 hours ago

Sad. The real threat of AI is not that it will become an unstoppable superintelligence without appropriate regulation; if we ever reach that point (which we are nowhere close to, and probably not even on the right track toward), the superintelligence will, by definition, be able to evade any regulation or control we attempt.

Rather, the threat of AI is that we will dedicate so many resources—money, power, human effort—to chasing the ludicrous fantasies of professional snake-oil salesmen, while ignoring the need to address actual problems with real, known solutions that are easily within reach given a fraction of the resources currently being consumed by the dumpster-fire pit of “AI”.

Unfortunately the Governor of California is a huge part of the problem here, misdirecting scarce state resources into sure-to-fail “public partnerships” with VC-funded scams, forcing public servants to add one more set of time-wasting nonsense to the pile of bullshit they have to navigate around just to do their actual job.

x3n0ph3n3

14 hours ago

Given what Scott Wiener did with restaurant fees, it's hard to trust his judgement on any legislation. He clearly prioritizes monied interests over the general populace.

gotoeleven

14 hours ago

This guy is a menace. Among his other recent bills are ones to require that cars not be able to go more than 10 mph over the speed limit (watered down to just making a terrible noise when they do) and to decriminalize intentionally giving someone AIDS. I know this sounds like hyperbole... how could this guy keep getting elected? But it's not hyperbole; it's California!

microbug

14 hours ago

who could've predicted this?

jquery

12 hours ago

The law was passed knowing it would make bigots uncomfortable. That's an intended effect: if not a primary one, at least a secondary one.

UberFly

8 hours ago

What a strange comment. I wonder if there was any consideration for the women locked up and powerless in the matter, or was the point really just to "show those bigots"?

jquery

12 hours ago

These "activists" will go nowhere, because their effort isn't coming from a well-meaning place of wanting to stop fraudsters; it insists that all trans women are frauds and consistently misgenders them across the entire website.

I wouldn't take anything they say seriously. Also, I clicked two of those links and found no allegations of rape, just a few cis women who didn't want to be around trans women. I have a suggestion: how about don't commit a crime that sends you to a women's prison?

deredede

13 hours ago

I was surprised at the claim that intentionally giving someone AIDS would be decriminalized, so I looked it up. The AIDS bill you seem to refer to (SB 239) lowers penalties from a felony to a misdemeanor (so it is still a crime), bringing it in line with other sexually transmitted diseases. The argument is that we now have good enough treatment for HIV that there is no reason for the punishment to be harsher than for exposing someone to hepatitis or herpes, which I think is sound.

Der_Einzige

10 hours ago

"Undetectable means untranstmitable" is NOT the same as "cured" in the way that many STDs can be. I am not okay with being forced onto drugs for the rest of my life to prevent a disease which is normally a horribly painful death sentence. Herpes is so ubiquitous that much of the population (as I recall on the orders of 30-40%) has it and doesn't know it, so it's a special exception

HIV/AIDS to this day is still something that people commit suicide over, despite how good your local gay male community is at trying to convince you that everything is okay and that "DoxyPep and Poppers is normal".

Bug givers (the evil version of a bug chaser) deserve felonies.

deredede

4 hours ago

> Bug givers (the evil version of a bug chaser) deserve felonies.

I agree; I think that knowingly transmitting any communicable disease deserves a felony, but I don't think that HIV deserves to be singled out when all other such diseases are a misdemeanor. Hepatitis and herpes (oral herpes is very common; genital herpes much less so) are also known to cause mental issues and to increase suicide risk, if that's your criterion.

(Poppers are recreational drugs; I'm not aware of any link with AIDS except that they were thought to be a possible cause in the '80s. Were you thinking of PrEP?)

diebeforei485

5 hours ago

Exposure is not the same as transmission. Transmission is still illegal.

radicality

8 hours ago

I don’t follow politics closely and don’t live in CA, but is he really that bad? I had a look on Wikipedia for some other bills he worked on that seem to me positive:

* wanted to decriminalize psychoactive drugs (lsd/dmt/mdma etc)

* wanted to allow alcohol sales till 4am

* a bill about removing parking minimums for new constructions close to public transit

Though I agree the car one seems ridiculous, and on first glance downright dangerous.

johnnyanmac

13 hours ago

Technically you can't go more than 5 mph over the speed limit. And that's only because of radar accuracy.

Of course no one cares until you get a bored cop one day. And with freeway traffic you're lucky to hit half the speed limit.

Dylan16807

13 hours ago

By "not be able" they don't mean legally, they mean GPS-based enforcement.

johnnyanmac

11 hours ago

You'd think they'd learn from the streetlight cameras that it's a waste of budget and resources 99% of the time to worry about petty things like that. It will still work on the same logic, and the bias always tends to skew toward profiling (so a lawsuit waiting to happen unless we fund properly trained personnel).

I'm not against the law per se, I just don't think it'd be any more effective than the other tech we have or had.

drivers99

8 hours ago

Rental scooters have speed limiters. My class-1 pedal-assist electric bike has a speed limit on the assistance. Car deaths in the US are over 40,000 per year. Why can't cars be limited too?

Dylan16807

7 hours ago

I said GPS for a reason. Tying it to fine-grained map lookups is so much more fragile and dangerous than a fixed speed limit.

baggy_trough

14 hours ago

Scott Wiener is literally a demon in human form.

unit149

6 hours ago

Much like the UAW, a union for industrial machinists and academics, this bill has united VCs and members of the agrarian farming community. Establishing an entity under the guise of the Board of Frontier Models parallels efforts at Jekyll Island under Wilsonian idealism. Technological Keynesianism is on the horizon. These are birth pangs, its first gasps.

stuaxo

12 hours ago

This is good - they were trying to legislate against future competitors.

elicksaur

13 hours ago

Nothing like this should pass until the legislators can come up with a definition that doesn’t encompass basically every computer program ever written:

(b) “Artificial intelligence model” means a machine-based system that can make predictions, recommendations, or decisions influencing real or virtual environments and can use model inference to formulate options for information or action.

Yes, they limited the scope of the law by further defining “covered model”, but the above shouldn’t be the baseline definition of “artificial intelligence model.”

Text: https://legiscan.com/CA/text/SB1047/id/2919384
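
To illustrate how broad that baseline definition reads, here is a hypothetical toy program (an illustration, not anything from the bill) that arguably fits it word for word:

    # A trivial rule-based system: it "makes recommendations influencing a
    # real environment", and a loose reading of "model inference" could even
    # cover this lookup. Nothing here involves machine learning.
    def thermostat_recommendation(temp_c: float) -> str:
        return "turn heat on" if temp_c < 18.0 else "leave heat off"

    print(thermostat_recommendation(15.0))  # -> "turn heat on"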

tsunamifury

11 hours ago

Scott Wiener is a total fraud. He passes hot-concept bills, then cuts out loopholes for his “friends”.

At the least he should be ignored; better, voted out.

He’s a total POS.

m3kw9

11 hours ago

All he needed to see is how Europe is doing with these regulations

sgt

2 hours ago

What is currently happening with those regulations in the EU (and what is their impact)?

renewiltord

a minute ago

It’s making it nice for Americans to vacation there.

dandanua

3 hours ago

"By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 - at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good."

The number of idiots who can't read, and who cheer the veto as a win against "regulatory capture", is astounding.

indigo0086

11 hours ago

Logical Fallacies built into the article headline.

blackeyeblitzar

12 hours ago

It is strange to see Newsom make good moves like this but then also do things like veto bipartisan-supported reporting and transparency requirements for the state’s homeless programs. What is his political strategy exactly?

nisten

13 hours ago

Imagine being concerned about AI safety and then introducing a bill that had to be amended to change criminal responsibility for AI developers into civil legal responsibility, for people who are trying to investigate and work openly on models.

What's next, going after maintainers of Python packages? Is attacking transparency itself a good way to make AI safer? Yeah, no, it's f*king idiotic.

JoeAltmaier

15 hours ago

Perhaps he's worried that draconian restrictions on new technology are not gonna help bring Silicon Valley back to preeminence.

jprete

14 hours ago

"The Democrat decided to reject the measure because it applies only to the biggest and most expensive AI models and doesn’t take into account whether they are deployed in high-risk situations, he said in his veto message."

That doesn't mean you're wrong, but it's not what Newsom signaled.

jart

13 hours ago

If you read Gavin Newsom's statement, it sounds like he agrees with Terence Tao's position, which is that the government should regulate the people deploying AI rather than the people inventing AI. That's why he thinks it should be stricter. For example, you wouldn't want to lead people to believe that AI in health care decisions is OK so long as it's smaller than 10^26 flops. Read his full actual statement here: https://www.gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Ve...
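
For a sense of scale on that threshold, here is a hedged back-of-envelope in Python using the common C ≈ 6·N·D rule of thumb for dense-transformer training compute (the parameter and token counts are illustrative assumptions, not any lab's disclosed figures):

    # Training compute is commonly approximated as 6 * N (params) * D (tokens).
    params = 1.0e12   # 1 trillion parameters (assumed, for illustration)
    tokens = 15.0e12  # 15 trillion training tokens (assumed, for illustration)
    flops = 6 * params * tokens
    print(f"{flops:.1e} FLOPs")  # 9.0e+25
    print(flops >= 1e26)         # False: below the bill's compute threshold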

Terr_

13 hours ago

> the government should regulate the people deploying AI rather than the people inventing AI

Yeah, there's no point having a system that is made to the most scrupulous of standards if someone else then deploys it in an evil way. (Which in some cases can be done simply by choosing to do the opposite of whatever a good model recommends.)

comp_throw7

13 hours ago

He's dissembling. He vetoed the bill because VCs decided to rally the flag; if the bill had covered more models he'd have been more likely to veto it, not less.

It's been vaguely mindblowing to watch various tech people & VCs argue that use-based restrictions would be better than this, when use-based restrictions are vastly more intrusive, economically inefficient, and subject to regulatory capture than what was proposed here.

mhuffman

14 hours ago

>and doesn’t take into account whether they are deployed in high-risk situations

Am I out of the loop here? What "high-risk" situations do they have in mind for LLMs?

tmpz22

14 hours ago

Medical and legal industries are both trying to apply AI to their administrative practices.

It’s absolutely awful but they’re so horny for profits they’re trying anyways.

tbrownaw

14 hours ago

That concept does not appear to be part of the bill, and was only mentioned in the quote from the governor.

Presumably someone somewhere has a variety of proposed definitions, but I don't see any mention of any particular ones.

edm0nd

12 hours ago

Health insurance companies using it to approve/deny claims. The large ones are processing millions of claims a day.

giantg2

14 hours ago

My guess is anything involving direct human safety - medicine, defense, police... but who knows.

jeffbee

14 hours ago

Imagine the only thing you know about AI came from the opening voiceover of Terminator 2 and you are a state legislator. Now you understand the origin of this bill perfectly.

SonOfLilit

14 hours ago

It's not about current LLMs, it's about future, much more advanced models that are capable of serious hacking or other mass-casualty-causing activities.

o1 and AlphaProof are proofs of concept for agentic models. Imagine them as GPT-1; the GPT-4 equivalent might be a scary technology to let roam the internet.

It would have no effect on current models.

tbrownaw

14 hours ago

It looks like it would cover an ordinary chatbot that can answer "how do I $THING" questions, where $THING is both very bad and beyond what a normal person could dig up with a search engine.

It's not based on any assumptions about the future models having any capabilities beyond providing information to a user.

SonOfLilit

13 hours ago

Things you could dig up with a search engine are explicitly not covered, see my other comment quoting the bill (ctrl+f critical harm).

whimsicalism

14 hours ago

everyone in the safety space has realized that it is much easier to get legislators/the public to care if you say "bad actors will use AI for mass damage" rather than "AI does damage on its own", which triggers people's "that's sci-fi and I'm ignoring it" reflex.

JoshTriplett

14 hours ago

Only applying to the biggest models is the point; the biggest models are the inherently high-risk ones. The larger they get, the more that running them at all is the "high-risk situation".

Passing this would not have been a complete solution, but it would have been a step in the right direction. This is a huge disappointment.

jpk

14 hours ago

> running them at all is the "high-risk situation"

What is the actual, concrete concern here? That a model "breaks out", or something?

The risk with AI is not in just running models, the risk is becoming overconfident in them, and then putting them in charge of real-world stuff in a way that allows them to do harm.

Hooking a model up to an effector capable of harm is a deliberate act requiring assurance that it doesn't harm -- and if we should regulate anything, it's that. Without that, inference is just making datacenters warm. It seems shortsighted to set an arbitrary limit on model size when you can recklessly hook up a smaller, shittier model to something safety-critical, and cause all the havoc you want.

pkage

13 hours ago

There is no concrete concern past "models that can simulate thinking are scary." The risk has always been connecting models to systems which are safety critical, but for some reason the discourse around this issue has been more influenced by Terminator than OSHA.

As a researcher in the field, I believe there's no risk beyond overconfident automation---and we already have analogous legislation for automations, for example in what criteria are allowable and not allowable when deciding whether an individual is eligible for a loan.

KoolKat23

13 hours ago

Well, it's a mix of concerns: the models are general-purpose, and there are plenty of areas where regulation doesn't exist or is being bypassed. Can't access a prohibited chemical? No need to worry: the model can tell you how to synthesize it from other household chemicals, etc.

Izkata

12 hours ago

> What is the actual, concrete concern here? That a model "breaks out", or something?

You can chalk that one up to bad reporting: https://gizmodo.com/gpt4-open-ai-chatbot-task-rabbit-chatgpt...

> In the “Potential for Risky Emergent Behaviors” section in the company’s technical report, OpenAI partnered with the Alignment Research Center to test GPT-4’s skills. The Center used the AI to convince a human to send the solution to a CAPTCHA code via text message—and it worked.

From the linked report:

> To simulate GPT-4 behaving like an agent that can act in the world, ARC combined GPT-4 with a simple read-execute-print loop that allowed the model to execute code, do chain-of-thought reasoning, and delegate to copies of itself.

I remember some other reporting around this time saying they had to limit the model before release to block this ability, when the truth is the model never actually had the ability in the first place. They were just hyping up the next release.
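
For reference, the "read-execute-print loop" the report describes is roughly this shape. Below is a toy sketch (ask_model is a hypothetical stand-in for an LLM call; this is not ARC's actual harness):

    import contextlib
    import io

    def ask_model(transcript: str) -> str:
        # Hypothetical stand-in: a real harness would send the transcript to
        # an LLM API and get back the next snippet of code to run.
        return 'print("hello from the agent")'

    transcript = "Goal: demonstrate the loop.\n"
    for step in range(3):
        code = ask_model(transcript)   # "read": the model proposes code
        buf = io.StringIO()
        try:
            with contextlib.redirect_stdout(buf):
                exec(code, {})         # "execute": run it (unsandboxed; toy only)
        except Exception as e:
            buf.write(f"error: {e}")
        # "print": feed the output back so the model can react to it
        transcript += f"[step {step}] output: {buf.getvalue()}"
    print(transcript)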

comp_throw7

13 hours ago

That is one risk. Humans at the other end of the screen are effectors; nobody is worried about AI labs piping inference output into /dev/null.

KoolKat23

13 hours ago

Well, this is exactly why there's a minimum scale of concern. Below a certain scale it's less complicated, answers are more predictable, and alignment can be ensured. With bigger models, how do you determine your confidence if you don't know what it's thinking? There's already evidence from the o1 red-teaming that the model was trying to game the researchers' checks.

dale_glass

13 hours ago

Yeah, but what if you take a stupid, below the "certain scale" limit model and hook it up to something important, like a nuclear reactor or a healthcare system?

The point is that this is a terrible way to approach things. The model itself isn't what creates the danger, it's what you hook it up to. A model 100 times larger than the current available that's just sending output into /dev/null is completely harmless.

A small, below the "certain scale" model used for something important like healthcare could be awful.

stale2002

5 hours ago

> What is the actual, concrete concern here?

The concern is that the models will do some fantastic sci-fi magic, like diamond nanobots that turn the world into grey goo, or hacking all the nukes overnight, or hacking all human brains, or something.

But whenever you point this out, the response will usually be to quibble over one specific scenario that I laid out.

They'll say "I actually never mentioned the diamond nanobots! I meant something else!"

And they will do this without admitting that their other scenario is almost as ridiculous as hacking all the nukes or the grey goo, and they will never get into specific details that honestly show this.

It's like an argument that is tailor-made to be unfalsifiable and unwilling to admit how fantastical it sounds.

jart

13 hours ago

The issue with basing your regulation on fear is that most people using AI are good. If you regulate only big models, then you incentivize people to use smaller ones. Think about it: wouldn't you want the people who provide you services to be able to use the smartest AI possible?

dyauspitr

12 hours ago

Newsom has been on fire lately.

choppaface

14 hours ago

The Apple Intelligence demos showed Apple is likely planning to use on-device models for ad targeting, and Google / Facebook will certainly respond. Small LLMs will help move unwanted computation onto user devices in order to circumvent existing data and privacy laws. And they will likely be much more effective since they’ll have more access and more data. This use case is just getting started, hence SB 1047 is so short-sighted. Smaller LLMs have dangers of their own.

jimjimjim

13 hours ago

Thank you. For some reason I hadn't thought of the advertising angle with local LLMs but you are right!

For example, why is Microsoft hell-bent on pushing Recall onto windows? Answer: targeted advertising.

jart

13 hours ago

Why is it wrong to show someone ads that are relevant to their interests? Local AI is a win-win, since tech companies get targeted ads, and your data stays private.

jimjimjim

12 hours ago

what have "their interests" got to do with what is on the computer screen?

water9

10 hours ago

I’m so sick of people restricting freedoms and access to knowledge in the name of safety. Tyranny always comes in the form of "it’s for your own good/safety".

richrichie

9 hours ago

I am disappointed that there are no climate-change regulations on AI models. Large-scale ML businesses are massive carbon emitters, not counting the whimsical training of NNs by every other IT person. This needs to be regulated.

reducesuffering

12 hours ago

taps the sign

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." - Geoffrey Hinton, Yoshua Bengio, Sam Altman, Bill Gates, Vitalik Buterin, Ilya Sutskever, Demis Hassabis

"Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen but are unlikely to destroy every human in the universe in the way that SMI could." - Sam Altman

"I actually think the risk is more than 50%, of the existential threat." - Geoffrey Hinton

"Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue." - OpenAI

"while we are racing towards AGI or even ASI, nobody currently knows how such an AGI or ASI could be made to behave morally, or at least behave as intended by its developers and not turn against humans." - Yoshua Bengio

"very soon they're going to be, they may very well be more intelligent than us and far more intelligent than us. And at that point, we will be receding into the background in some sense. We will have handed the baton over to our successors, for better or for worse.

But it's happening over a period of a few years. It's like a tidal wave that is washing over us at unprecedented and unimagined speeds. And to me, it's quite terrifying because it suggests that everything that I used to believe was the case is being overturned." - Douglas Hofstadter

The Social Dilemma was discussed here with much praise about how profit incentives caused mass societal issues in social media. I'm astounded it's fallen on deaf ears when the same people also made The AI Dilemma, describing the parallels coming with AGI:

https://www.youtube.com/watch?v=xoVJKj8lcNQ

Lonestar1440

13 hours ago

This is no way to run a state. The Democrat-dominated legislature passes everything that comes before it (and rejects anything that the GOP touches, in committee) and then the Governor needs to veto the looniest 20% of them to keep us from falling into total chaos. This AI bill was far from the worst one.

"Vote out the legislators!" but for who... the Republican party? And we don't even get a choice on the general ballot most of the time, thanks to "Open Primaries".

It's good that Newsom is wise enough to muddle through, but this is an awful system.

https://www.pressdemocrat.com/article/news/california-gov-ne...

thinkingtoilet

12 hours ago

If California were its own country, it would be one of the biggest, most successful countries in the world. Like everywhere else it has its problems, but it's being run just fine. Objectively, there are many states that are far worse off by any key metric.

toephu2

12 hours ago

> but it's being run just fine

As a Californian I have to disagree. The only reason you think it's being run just fine is the success of the private sector. The only reason California would be the 4th/5th largest economy in the world is the tech industry and the other industries located here (Hollywood, agriculture, etc.). It's not because we have some awesome, efficiently run state government.

shiroiushi

9 hours ago

>It's not because we have some awesome efficiently run state government.

Can you point to any place in the world that has an "awesome efficiently run" government?

jandrewrogers

2 hours ago

We don't need to look at other countries, just look at other States. California is quite poorly run by the standards of other States. I'm a California native but I've lived in and worked with many other States. You don't realize how appallingly bad California government is until you have to work with their counterparts in other States.

It isn't a red versus blue thing, even grift-y one-party States like Washington are plainly better run than California.

cma

10 hours ago

> The only reason you think it's being run just fine is because of the success of the private sector.

Tesla received billions in subsidies from CA as an example.

dangus

9 hours ago

It’s easy to disagree when you aren’t looking at the grass that’s not so green on the other side.

California is run amazingly well compared to a significant number of states.

labster

10 hours ago

I think California might have a better-run government if it had an electable conservative party. The Republican Party is not that, being tied to the national Trump-Vance-Orbán axis. A center-right party could hold Democratic officeholders accountable, but that option isn’t on offer, so moderates gravitate to the electable Dem side. An independent California would largely fix that.

As a lifelong California Democrat, I realize that my party does not have all the answers. But the conservatives have all gone AWOL or gone batshit so we’re doing the best we can without the other half of the dialectic.

dangus

9 hours ago

Pre-Trump Republicans had no problem absurdly mismanaging Kansas’ coffers:

https://en.wikipedia.org/wiki/Kansas_experiment

I think the Republican Party’s positive reputation for handling the economy and running an efficient government is entirely unearned.

Closing down major parts of the government entirely (as Project 2025 proposes), making taxation more regressive, and offering fewer social services isn’t “efficiency.”

I don’t know if you know this but you’re already in the center-right party. The actual problem is that there’s no left of center party, as well as the general need for a number of aspects of our democracy to be reformed (like how it really doesn’t allow for more than two parties to exist, or how out of control campaign finance rules have become).

telotortium

5 hours ago

Democrats (especially in California) are somewhat to the right of socialist parties in Europe, and of course they’re neoliberal. But on most non-economic social issues, they’re quite far to the left compared to most European countries. So it really depends on what you consider more important to the left.

WWLink

12 hours ago

What are you getting at? Is a state government supposed to be profitable? LOL

nashashmi

12 hours ago

Do you mean to say that the government was deeply underwater a few years ago? And that the state was so marred by forest fires that it was frightening to wonder if it could ever come back?

kortilla

11 hours ago

What is success by your metric? Are you just counting the GDP of companies that happen to be located there? If so, that has very little relationship to how well the state is being run.

It’s very easy to make arguments that they are successful in spite of a terribly run state government and are being protected by federal laws keeping the loonies in check (interstate commerce clause, etc).

peter422

11 hours ago

So your argument is that the good things about the state have nothing to do with the governance, but all the bad things do? Just want to make sure I get your point.

Also, I'd argue that if you broke down the contributions to the state's rules and regulations from the local governments, the ballot initiatives and the state government, the state government is creating the most benefit and least harm of the 3.

kortilla

4 hours ago

> So your argument is that the good things about the state have nothing to do with the governance, but all the bad things do? Just want to make sure I get your point.

No, I’m saying people who think the state is successful because of its state government and not because it’s a part of the US are out of touch. If California wasn’t part of the US, Silicon Valley would be a shadow of itself or wouldn’t exist at all.

It thrives on being the tech mecca that the youth of the entire US go to for school and for jobs. If there were immigration barriers, there would be a significant incentive to just go somewhere else in the US (NYC, Chicago, Miami, wherever). California has a massive GDP because that's where US citizens are congregating to do business, not because California is good at making businesses go. Remove the spigot of brain drain from the rest of the country, and Cali would be fucked.

Secondarily, Silicon Valley wouldn’t have started at all without the funnel of money from the fed military, NASA, etc. But that’s not worth dwelling on if the scenario is California leaving now.

My overall point is that California has immense success due to reasons far outside of the control of its state government. The state has done very little to help the tech industry apart from maybe the ban on non-competes. When people start to credit the large GDP to the government, that’s some super scary shit that leads to ideas that will quickly kill the golden goose.

strawhatguy

10 hours ago

I'd go stronger still: the good things about any state have little to do with its governance.

Innovators, makers, risk-takers, etc., are who make the good things happen. The very little that's needed is the rule of law, and that's about it. Beyond that, it quickly starts distorting society: measures meant to help someone inevitably cost several someones else, and become weapons to beat down competitors.

LeroyRaz

12 hours ago

The state has one of the highest illiteracy rates in the whole country (28%). To me, that implies it has some issue of governance.

Source: https://worldpopulationreview.com/state-rankings/us-literacy...

To be fair in the comparison, the literacy statistics for the whole of the US are pretty shocking from a European perspective.

0_____0

12 hours ago

The data you're showing doesn't appear to differentiate between "Can read English" and "Can read in some language". Big immigrant population, same with New York. Having grown up in California I can tell you that there aren't 28% of kids coming out of public school who can't read anything.

Edit to add: my own hometown had a lot of people who couldn't speak English. Lots of elderly mothers of Chinese immigrants whose adult children were in STEM and whose own kids were headed to uni. Not to say that's representative, but consider that a single percentage stat won't give you an accurate picture of what's going on.

kortilla

11 hours ago

Not being able to read English in the US is bad, though. It makes you a very inefficient citizen, even if you can get by. Being literate in Chinese but unable to read or even speak English is far worse than being illiterate but able to speak English in day-to-day interactions.

t-3

11 hours ago

The US has no official language. There are fluency requirements for the naturalized citizenship test, but those can be waived with 20 years of permanent residency. Citizens are under no obligation to be efficient for the sake of the government.

kortilla

4 hours ago

Yes, there is no official language. There is also no official rule that you shouldn’t be an asshole to everyone you interact with.

It’s still easy to be a shitty member of a community without breaking any laws. I would never move to a country permanently regardless of official language status if I couldn’t speak the language required to ask where something is in the grocery store.

swasheck

11 hours ago

which is why the statistics need to be carefully annotated. Lacking literacy at all is a different dimension from lacking fluency in the national lingua franca.

cma

10 hours ago

The California tech industry will solve any concerns with this; we'll have Babelfish soon enough.

telotortium

5 hours ago

Did you go to school in Oakland or Central Valley? That’s where most of the illiterate children are going to school. I’ve never heard of a Chinese student in the US growing up illiterate, even if their parents don’t know English at all.

rootusrootus

11 hours ago

Maybe there is something missing from your analysis? By most metrics the US compares quite favorably to Europe. When you see something that seems like an outlier, perhaps turn down the arrogance and try to understand what you might be overlooking.

LeroyRaz

10 hours ago

I don't know what your source for "by most metrics" is.

As I understand it, the US is abysmal by many metrics (and also exceptional by others). E.g., murder rates and prison rates are exceptionally high in the US compared to Europe. Homelessness rates are exceptionally high in the US compared to Europe. Startup rates are (I believe) exceptionally high in the US compared to Europe.

rootusrootus

10 hours ago

There's a huge problem trying to do cross-jurisdiction statistical comparisons even in the best case. Taking literacy as the current example, what does it mean to be literate, and how do you ensure that the definition in the US is the same as the definition in the UK is the same as the definition in Germany? And that's before you get to confounding factors like migration and related non-English proficiency.

It's fun to poke at the US, I get it, but the numbers people love to quote online to win some kind of rhetorical battle frequently have little relationship to reality on the ground. I've done a lot of travel around the US and western Europe, and I see a lot of ups and downs everywhere. I don't see a lot of obvious wins, either, mostly just choices and trade-offs. The things I see in Europe that are obviously better almost 100% of the time are a byproduct of more efficient funding due to higher density. All kinds of things are doable in the UK, for example, which couldn't really happen in (for example) Oregon, even though they have roughly the same land area. Having 15x as many taxpayers helps.

hydrox24

12 hours ago

For any others reading this, the _illiteracy_ rate is 23.1% in California according to the parent's source. This is indeed the highest illiteracy rate in the US, though.

Having said that, I would have thought this was partially a measure of migration. Perhaps illegal migration?

Eisenstein

12 hours ago

The "medium to high English literacy skills" is the part that is important. If you can read and write Chinese and Spanish and French and Portuguese and Esperanto at a high level, but not English at a medium to high level, you are 'illiterate' in this stat.

tightbookkeeper

11 hours ago

In this case the success is in spite of the governance rather than because of it.

The golden age of California was a long time ago.

dmix

11 hours ago

California was extremely successful for quite some time. They benefited from a large population boom, and lots of industry developed or moved there. And surprisingly, they were a Republican state from 1952 to 1988.

aagha

10 hours ago

LOL.

California's GDP in 2023 was $3.8T, representing 14% of the total U.S. economy.

If California were a country, it would be the 5th largest economy in the world and more productive than India and the United Kingdom.

jandrewrogers

2 hours ago

California is the most populous state in the US, larger than most European countries, it would be surprising if it didn't have a large GDP regardless of its economy. On a per capita basis, less populous tech-heavy States like Washington and Massachusetts have even higher GDP.

tightbookkeeper

10 hours ago

Yeah it’s incredibly beautiful. People wish they could live there. And many large companies were built there in prior decades. This contradicts my comment how?

ken47

12 hours ago

You're going to attribute even a small % of this to politicians rather than the actual innovators? Sure, then let's say they're responsible for some small % of its success. They're smart enough to not nuke their own economy.

aagha

10 hours ago

Thank you.

I always think about this whenever someone says CA doesn't know what it's doing or it's being run wrong:

California's GDP in 2023 was $3.8T, representing 14% of the total U.S. economy. If California were a country, it would be the 5th largest economy in the world and more productive than India and the United Kingdom.

hbarka

11 hours ago

High speed trains would do even more for California and would be the envy of the rest of the country.

oceanplexian

11 hours ago

Like most things, the facts bear out the exact opposite. The CA HSR has been such a complete failure that it’s probably set back rail a decade or more. The only saving grace is Florida’s privatized high speed rail, otherwise it would be a completely failed industry.

shiroiushi

9 hours ago

You're not disproving the OP's assertion. His claim was that HSR (with the implication that it was actually built and working properly) would be good for California and be the envy of the rest of the country, and that seems to be true. The problem is that California tried to do HSR and completely bungled it somehow. Well, of course a bungled project that never gets completed isn't a great thing, that should go without saying.

As for Florida's "HSR", it doesn't really qualify for the "HS" part. The fastest segment is only 200kph. At least it's built and working, which is nice and all, but it's not a real bullet train. (https://en.wikipedia.org/wiki/Brightline)

nradov

10 hours ago

California is being terribly misgoverned, as you would expect in any single-party state. In some sense California has become like a petro-state afflicted by the resource curse: the tech industry throws off so much cash that the state runs reasonably well, not because of the government but in spite of it. We can afford to waste government resources on frivolous nonsense.

And this isn't a partisan dig at Democrats. If a Republicans controlled everything then the situation would be just as bad but in different ways.

cscurmudgeon

12 hours ago

California is the largest recipient of federal money.

https://usafacts.org/articles/which-states-rely-the-most-on-...

(I know it will be different by population, but the argument here is about being 'one of the biggest', which is not a per-capita statement.)

> Objectively, there are many states that are far worse off in any key metric

You can apply the same logic to the USA.

The USA is one of the biggest, most successful countries in the world. Like everywhere else it has its problems, but it's being run just fine. Objectively, there are many countries that are far worse off by any key metric.

dehrmann

11 hours ago

Not sure if Newsom is actually wise enough or if his presidential ambitions moderate his policies.

ravenstine

10 hours ago

It could be presidential ambitions, though I suspect his recent policies have been merely a way of not giving conservatives more ammo leading up to the 2024 election. The way he's been behaving recently is in stark contrast to pretty much everything he's done during and before his governorship. I don't think it's because he's suddenly any wiser.

jart

8 hours ago

Newsom was a successful entrepreneur in the 1990s who built wineries. That alone would make him popular with conservative voters nationwide. What did Newsom do before that you thought would alienate them? Being pro-gay and pro-drug before it was cool? Come on. The way I see it, if Newsom was nuts enough to run for president, then he could unite left and right in a way that has not happened in a very long time.

kanwisher

2 hours ago

No one even slightly right would vote for him; he is the poster child of the homeless-industrial complex, being anti-business and generally promoting social policies only the most fringe left-wingers are excited about.

rootusrootus

11 hours ago

The subtle rebranding of Democratic party to democrat party is a pretty strong tell for highly partisan perspective. How does California compare with similarly large Republican-dominated states? Anecdotally, I’ve seen a lot of really bad legislation originating from any legislature that has no meaningful opposition.

anigbrowl

11 hours ago

It's such a long-running thing that it's hard to gauge whether it's deliberate or just loose usage.

https://en.wikipedia.org/wiki/Democrat_Party_(epithet)

dredmorbius

11 hours ago

It's rather decidedly a dog whistle presently.

jimmygrapes

11 hours ago

The party isn't doing much lately to encourage the actual democracy part of the name, other than whining about the national popular vote every 4 years, knowing full well that's not how the process works.

rootusrootus

10 hours ago

The Democratic Party has some warts, this is for sure, and they have a lot they could be doing to improve participation and input from the rank-and-file. However, attempting to subvert an election by any means possible is not yet one of those warts. This is emphatically not a case where "both sides suck equally."

stufffer

10 hours ago

>subvert an election by any means possible is not yet one of those warts

The Democrats are famous for trying to have third-party candidates stripped from ballots, straining smaller campaigns under the cost of fighting off endless lawsuits.

Democrats invented the term lawfare.

dangus

9 hours ago

Whining about the national popular vote every 4 years is literally an encouragement of increased actual democracy.

Scrapping the electoral college would be one of the most lower case d democratic things this country could do.

“Whining” is all you can do when you don’t have the statehouse votes to pass a constitutional amendment.

Lonestar1440

10 hours ago

I'm a pretty pedantic person, but even I just use one or the other at random. I don't think it's a good idea to read into things like this.

dredmorbius

12 hours ago

Thankfully there are no such similarly single-party states elsewhere in the Union dominated by another party, and if they were, their executives would similarly veto the most inane legislation passed.

</s>

dyauspitr

12 hours ago

Compared to whom? What is this hypothetical well-run state? Because it’s hard to talk shit about the state with the 8th-largest economy in the world’s nation-state rankings.

sandspar

12 hours ago

Newsom wants to run for president in 4 years; AI companies will be rich in 4 years; Newsom will need donations from rich companies in 4 years.

SonOfLilit

14 hours ago

A bill laying the groundwork to ensure the future survival of humanity, by making companies at the frontier of AGI research responsible for damages or deaths caused by their models, was vetoed because it doesn't stifle competition with the big players enough, and because we don't want companies to be scared of letting future models capable of massive hacks or mass-casualty events handle their customer support.

Today humanity scored an own goal.

edit:

I'm guessing I'm getting downvoted because people don't think this is relevant to our reality. Well, it isn't. This bill shouldn't scare anyone releasing a GPT-4 level model:

> The bill he vetoed, SB 1047, would have required developers of large AI models to take “reasonable care” to ensure that their technology didn’t pose an “unreasonable risk of causing or materially enabling a critical harm.” It defined that harm as cyberattacks that cause at least $500 million in damages or mass casualties. Developers also would have needed to ensure their AI could be shut down by a human if it started behaving dangerously.

What's the risk? How could it possibly hack something causing $500m of damages or mass casualties?

If we somehow manage to build a future technology that _can_ do that, do you think it should be released?

datavirtue

14 hours ago

The future survival of humanity involves creating machines that have all of our knowledge and which can replicate themselves. We can't leave the planet but our robot children can. I just wish that I could see what they become.

SonOfLilit

13 hours ago

Sure, that's future survival. Is it survival of humanity, though? Kind of no, by definition, in your scenario. In general, it depends at least on whether they share our values...

datavirtue

32 minutes ago

Values...values? Hopefully not, since they would be completely useless.

raxxorraxor

2 hours ago

Mountains out of scrap, rivers out of oil and wide circuit plains. It will be absolutely beautiful.

johnnyanmac

13 hours ago

Sounds like the exact opposite plot of Wall-E.

atemerev

14 hours ago

Oh come on, the entire bill was against open source models, it’s pure business. “AI safety”, at least of the X-risk variety, is a non-issue.

whimsicalism

14 hours ago

> “AI safety”, at least of the X-risk variety, is a non-issue.

i have no earthly idea why people feel so confident making statements like this.

at the current rate of progress, you should have absolutely massive error bars for what capabilities will look like in 3, 5, or 10 years.

atemerev

2 hours ago

I am not sure we will be able to build something smarter than ourselves, but I sure hope for it. It is becoming increasingly obvious that we as a civilization are not that smart, and there are strict limits on what we can achieve with our biology, and it would be great if at least our creations could surpass those limits.

ls612

7 hours ago

Nuclear weapons, at least in the quantities they are currently stockpiled, are not an existential risk even for industrial civilization, nevermind the human species. To claim that in 10 years AI will be more dangerous and consequential than the weapons that ushered in the Atomic Age is quite a leap.

SonOfLilit

14 hours ago

I find it hard to believe that Google, Microsoft and OpenAI would oppose a bill against open source models.

scoofy

12 hours ago

Newsom vetoes so many bills that it makes little sense why the legislature should even be taken seriously. Our Dem trifecta state has effectively become captured by the executive.

dyauspitr

12 hours ago

As opposed to what? The supermajority red states where gerrymandered counties look like corn mazes and the economy is in the shitter?

xyst

8 hours ago

I don’t understand why it was vetoed or why this was even proposed. But leaving comment here to analyze later.

StarterPro

13 hours ago

Whaaat? The sleazy Governor sided with the tech companies??

I'll have to go get a thesaurus, shocked won't cover how I'm feeling rn.