yalogin
11 hours ago
The big issue isn’t even age verification. The end goal is verified user identification. They want every transaction on the internet to be associated with the exact identity of the user. No more anonymity.
In the short term the way it will be implemented is this: age verification will not be binary; the system will also push your DoB, name, location, etc. They'll say "the choice is with the user," but the default will be to send everything. Very soon there will be services that require DoB or name or something else to gate new or existing functionality. That is the slippery slope it will be built as, and that is how they win the game.
totetsu
8 hours ago
It's not "very soon"; it's already the case that if you want to enable the latest models in the OpenAI API you have to submit your details to their "identity provider".
abracadaniel
8 hours ago
Which is why it’s important to be able to run models locally. Which also might explain the strategy behind buying all of the memory that is or will exist for at least a year out. Maybe we’ll eventually see AI safety be used to prevent people from running local models.
PeterStuer
2 hours ago
You mean having to sign into your Microsoft account to get your bootloader co-signed before your legally mandated TPM 3.1 allows you to install a government-blessed and sufficiently telemetrized signed OS on "your" computer, if you are on the whitelist of not-yet-misinformation-spreaders?
sandworm101
5 hours ago
+1 for local models. It also teaches users about how much energy they are using. One's perspective on 24/7 chatbots and agentic operating systems changes when you feel the heat coming from a rack of gpus.
(Spring is nearly here and my excuse about my rig also heating my house is about to end. Soon I will be paying extra to run my a/c as my rig pumps out a steady 1000w under load.)
1e1a
2 hours ago
You could use it to heat a tropical greenhouse.
gruez
7 hours ago
Given that the recent Mexican telecom hacks were allegedly done with significant help from OpenAI/Anthropic's chatbots, it seems at least somewhat prudent to require some sort of identity verification for API access? I'm struggling to see how this isn't the tech community's version of "no background checks for gun purchases" or "no KYC for bank accounts".
totetsu
3 hours ago
Is API access really so extreme that it's italics-worthy? Technology should be available to us in roles other than passive consumer using front ends that might not suit what we need, or that work against us in some way. I'm already giving a credit card to OpenAI to use the service, but now in addition I have to hand my government ID over to withpersona.com. Who are they? Who are their investors? Will they leak my information accidentally/accidentally-on-purpose/on-purpose? Okay, maybe Rick Song and Persona Identities are genuinely trustworthy, but what happens when someone wants an exit in the future and they merge with Palantir, and now when I generate a picture I have to worry about being added to a target list for some automated kamikaze drone kill-chain à la Black Mirror? Or if this becomes standard practice, maybe it's not Persona Inc. but dozens of these companies I have to vet, and it becomes too hard. Rather than guns, this is more like identity verification for pipe purchases from the hardware store because one could use the pipe to build a rocket.
paradox460
7 hours ago
They were also likely done with keyboards and mice. Should we require id at point of purchase for those?
gruez
7 hours ago
Alright, so does that mean we don't need KYC for gun purchases or bank accounts either?
Of course you're probably going to say something about how guns and bank accounts are crucial components of crime, in which case the same holds for AI in the Mexican telecoms hack.
roenxi
4 hours ago
> Alright, so does that mean we don't need KYC for ... bank accounts either?
That sounds reasonable. A bank can just be an institution that holds money for people; they don't need to be all over their customers' business. It is like a telecom not being responsible for what their customers say. In a simple sense, banks don't need KYC.
BobbyJo
7 hours ago
What happens when everyone needs to use AI for their job? Genuine question that I think gets at the heart of the debate.
Once a common technology that everyone has access to becomes powerful enough to alter the lives of others on command, do we as a society just need to do away with the concept of anonymity? We are all just too powerful in isolation, and too much of a threat to the collective, that we cannot reasonably expect not to have some governing body watching at all times?
Today, you can buy parts/print a completely untraceable firearm, so do we license sales of steel tubing and 3D printers?
gruez
6 hours ago
>What happens when everyone needs to use AI for their job? Genuine question that I think gets at the heart of the debate.
Considering most places do direct deposit, and that requires a bank account (so KYC), I don't see what's particularly new here. Many places also do background and/or work eligibility checks, which again is a form of KYC.
>Today, you can buy parts/print a completely untraceable firearm, so do we license sales of steel tubing and 3D printers?
Fortunately 3D-printed guns are bad enough that it's not really an issue, although the bigger threat is probably CNC machines. However, those will probably get a pass, because they're eye-wateringly expensive compared to black market guns; nobody would bother.
AnthonyMouse
4 hours ago
> Considering most places does direct deposit and that requires a bank account (so KYC), I don't see what's particularly new here.
Slippery slope is a fallacy, they said.
> Many places also do background and/or work eligibility checks, which again is a form of KYC.
Except that it isn't KYC at all, both because employees aren't customers (most people are the employees of one company but the customers of hundreds or more), and because the majority of people don't have that requirement imposed on them by the government. There are many jobs you can get without a background check.
martin-t
6 hours ago
Just yesterday I thought about the right middle ground for KYC when buying guns.
The issue with centrally registering guns is that when your country is taken over by hostile forces (whether an invading army or a democratically elected abuser who turns it into a dictatorship), they know who has the guns and can force those people to surrender them (politely at first; authoritarians always use a salami-slicing technique).
The issue with no controls is that even anti-social and mentally ill people can get them.
I wonder if the right middle ground could be:
- Sellers have to do their due diligence - require ID, proof of psychological examination, whatever else is deemed the right balance.
- Not doing due diligence means they get punishment equal to that for any offense committed with that gun.
- They might be required to mark/stamp the gun so that it can be traced back to them or have witnesses for the transfer.
AnthonyMouse
3 hours ago
The arguments for background checks generally have to be split into two separate classes of people.
The first is the mentally ill. Intuitively it seems desirable to say that someone undergoing treatment for e.g. depression shouldn't buy a gun. The problem here is the massive perverse incentive. If you're pretty depressed but you're not inclined to forfeit your ability to buy firearms, you now have a significant incentive to avoid seeking treatment. At which point you can still buy a gun, but now your mental illness is going untreated, which is worse than where we started.
The second is career criminals, i.e. people who have already been convicted of a crime and want to commit another one. The problem here is that career criminals... don't follow laws. If they want a gun they steal one or recruit someone without a criminal record into their gang etc., both of which are actually worse than just letting them buy one.
On top of that, when people get caught, prosecutors generally try to get them to testify against other criminals in exchange for a deal, and those criminals are then going to be pretty mad at them. That gives them a much higher than average legitimate need to exercise their right to self-defense once they get back out. And then you get three independent bad outcomes: if they can't defend themselves, they get killed for snitching; if they acquire a gun anyway to prevent that, they could go back to prison even if they were otherwise trying to reform themselves; and if they think about this ahead of time, or are advised of it by their lawyers, they'll be less likely to cooperate with prosecutors, because the other two scenarios, both bad for them, happen only if they snitch.
Meanwhile the proposal was only ever expected to address a minority of the problem to begin with because plenty of the people who do bad things can pass the background check. And if you have a policy that doesn't even solve most of the original problem while creating several new ones, maybe it's just a bad idea?
watwut
2 hours ago
Third: non-career violent people. Domestic violence or other interpersonal violence should prevent you from having a gun, regardless of whether you are a career criminal.
watwut
2 hours ago
Personal guns have absolutely nothing to do with defense against "hostile forces". That is pure fantasy.
Occasionally, gun owners are THE hostile force, buying guns explicitly to bully and threaten. But that is about it, really.
rdevilla
5 hours ago
I hope someone takes those Meta glasses or an Oculus or Apple Vision or something and hooks it up to clearview or some other facial recognition service and agentically scrapes OSINT sources to doxx people on the street, in real time.
One glance and I have your full name, home address, SSN, all online handles and aliases, employment history, email, and phone number, instantaneously on a HUD. It doesn't even need to be marketed as "doxxing as a service;" it can just be marketed as "professional networking" or "social media." That way people will voluntarily submit their information and all rights over it to the platform.
Until people feel their privacy being viscerally raped on a minute to minute basis nothing will change.
sandworm101
4 hours ago
My Black Mirror prediction for how augmented reality and AI will interact, in order of horribleness:
1> Auto-nude. Today we can "nudify" photos and videos. Soon, augmented reality glasses will be able to nudify everyone in real time. (This is totally possible today.)
2> Auto-translation. Cool. Everyone can talk to everyone, but users will have censorship options. I don't much like hearing Australians, so I will just have the glasses make them all sound like proper Texans. And the sound of people with views alternative to my own is replaced with calming country music.
3> Lie detection. Glasses will look for facial/vocal tics suggestive of deception. Good luck talking your way out of a ticket, or explaining to your boss how you were "sick", when they have a lie detector online 24/7.
4> Censorship of "bad" objects. Signs with ads or news that I do not agree with will be blocked and replaced with more appropriate text. Mosques will appear as churches. Garbage and pollution will become happy birds and clear blue skies. Homeless people will be replaced with attractive young people (see #1 above).
5> Race replacement. I don't like certain races. So my glasses now make everyone Chinese. So long as I don't turn off the glasses, I can live my custom racist utopia.
rdevilla
4 hours ago
This is great. I finally feel for the first time in my life that science has in fact gone too far. At this point living in the so-called "third world" to avoid digital-rape-as-a-service and the ever increasing pace of technology sounds eminently reasonable.
sandworm101
3 hours ago
I forgot about lip reading. Lots of possible evils if glasses can read lips.
BlackFly
4 hours ago
An account-level flag in a user account on an operating system is the opposite of verified identification. It is self-assertion by the owner of the computer: the parent. If such a control works in the same way as enterprise supervision, the child won't be able to install a VPN or other software to bypass the control.
laughing_man
3 hours ago
Yeah, none of this is about children. "Think of the children" is just a means to an end, and most likely what we'll find is even when we lose all pretense of anonymity somehow the kids will figure out a way to get access.
SarahC_
5 hours ago
IMAGINE A WAR.
Now - wouldn't a government LOVE to know who's saying what? Rather than shutting down the entire $$$$$ international corporate internet.
Money concerns as usual.
Buttons840
7 hours ago
Somehow they will eliminate anonymity for real people, but bots will still be pushing Russian or... some other country's interests with massive bot farms.
owisd
10 hours ago
If the end goal was user identification then the digital ID + zero knowledge proof age verification methods would be disallowed, which they aren't. https://blog.google/products-and-platforms/platforms/google-...
mindslight
10 hours ago
You got suckered by the marketing. Google's "zero knowledge" approach requires devices locked down with remote attestation, which prohibits end users from running their own code (when interacting with websites that prevent it, which as time goes on under this plan will be everywhere). The only actual difference here is that this is Google's desired approach to destroying anonymity and personal computing.
remcob
9 hours ago
Why is that required? The whole point of zero knowledge proofs is that it can run on untrusted devices.
Aurornis
8 hours ago
Because true “zero knowledge” proofs are actually useless for age gating purposes.
Conceptually, if a proof was truly zero knowledge and there were no restrictions on generating it, there would also be nothing stopping someone from launching a website where you clicked a button and were given a free token generated from their ID. If it was truly a zero knowledge proof it would be impossible to revoke the ID that generated it, so there is no disincentive to freely share IDs.
So every real world "zero knowledge" proof eventually restricts something. Some require you to request your tokens from a government entity. Others try to do hardware attestation chains so that, theoretically, you can't generate them outside of the approved means.
But the hacker fantasy of truly zero knowledge proofs is impossible because 1 hour after launch there would be a dozen “Show HN” posts with vibe coded websites that dispense zero knowledge tokens.
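The unlinkability that makes the "token dispenser" site possible is easy to see with a textbook RSA blind signature, the classic building block behind unlinkable tokens. This is a toy sketch with tiny illustrative numbers (not any real vendor's scheme, and not secure):

```python
# Toy RSA blind signature: the issuer signs a blinded token without
# ever seeing the token itself, so the finished signature cannot be
# linked back to the ID that requested it.
# Textbook-sized numbers for illustration only -- not secure.

p, q = 61, 53
n = p * q                          # issuer's public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # issuer's private exponent

token = 65   # the age token the user wants signed (must be < n)
r = 100      # user's secret blinding factor, gcd(r, n) == 1

# User blinds the token before sending it to the issuer.
blinded = (token * pow(r, e, n)) % n

# Issuer signs blindly: it sees only `blinded`, never `token`.
blind_sig = pow(blinded, d, n)

# User unblinds. sig == token^d mod n, a valid signature on `token`.
sig = (blind_sig * pow(r, -1, n)) % n

# Anyone can verify the signature against the issuer's public key...
assert pow(sig, e, n) == token
# ...but the issuer cannot link (token, sig) to the blinded request,
# which is exactly why a token-dispensing website can't be traced or
# have the originating ID revoked.
```

This is the property Aurornis describes: once a valid `(token, sig)` pair exists, nothing ties it to the person who obtained it, so it can be handed out freely.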
AnthonyMouse
4 hours ago
It's also unclear what they'd even be useful for to begin with.
You need some kind of proof system if you need a central authority to certify something, but why is that required? The parents know the age of their kids. They don't need the government to certify that to them. And then the parents can get the kids a device that allows them to set age restrictions.
Whether those restrictions are imposed by the device on content it displays (which is the correct way to do it) or by the device telling the service the approximate age of the user (which needlessly leaks information), you don't actually need a central authority to certify anything to begin with because either way it's just a configuration setting in the child's device.
gbear605
9 hours ago
You’d have to ask Google