srndsnd
a day ago
To me, what's missing from that set of recommendations is some method to increase the liability of companies who mishandle user data.
It is insane to me that I can be notified via physical mail of months old data breaches, some of which contained my Social Security number, and that my only recourse is to set credit freezes from multiple credit bureaus.
pkphilip
32 minutes ago
But reveal any "classified" information about the govt and you will end up in jail. The severe asymmetry between what a citizen can do and what the govt gives itself the right to do is crazy.
closeparen
2 hours ago
Shared secrets are criminally negligent security architecture in 2024. We can authenticate identity and authorize payment without giving the relying party a token to leak or abuse. The energy behind this problem is good, but "everyone try harder to protect the shared secrets entrusted to you" would be a tragic waste of it.
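The alternative to shared secrets can be sketched as challenge-response: the relying party only ever sees one-time proofs, never a reusable credential. Below is a toy Python sketch of that idea (all class and method names are illustrative; a real system would use asymmetric signatures and a payment network, not raw HMAC between three in-process objects):

```python
import hashlib
import hmac
import secrets

# Toy challenge-response payment flow. The cardholder's key is shared only
# with the issuer; the merchant (relying party) never holds anything
# reusable -- a leaked (challenge, tag) pair is worthless for future charges.

class Issuer:
    def __init__(self):
        self._keys = {}  # user_id -> key (known only to issuer and user)

    def enroll(self, user_id):
        key = secrets.token_bytes(32)
        self._keys[user_id] = key
        return key

    def verify(self, user_id, challenge, tag):
        expected = hmac.new(self._keys[user_id], challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

class Cardholder:
    def __init__(self, user_id, key):
        self.user_id = user_id
        self._key = key

    def approve(self, challenge):
        # One-time proof bound to this specific challenge.
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

class Merchant:
    def charge(self, cardholder, issuer):
        challenge = secrets.token_bytes(16)  # fresh nonce per transaction
        tag = cardholder.approve(challenge)
        # The merchant may store (challenge, tag) forever; replaying the tag
        # against any new challenge proves nothing.
        return issuer.verify(cardholder.user_id, challenge, tag)
```

Contrast with storing a card number: here a breach of the merchant's database leaks only spent nonces, not anything that authorizes a new charge.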
DoctorOetker
2 hours ago
> [...] would be a tragic waste of it.
The first time would have been a tragedy, from then on it has been farce after farce.
Imagine a world where companies would have to prove the necessity of storing specific factoids. It would only take 1 security researcher to prove it unnecessary, invalidating that class of "legitimate interests".
Today this value judgement happens in human brains, like the (correct) judgement in your comment. If we want to scale it objectively we would have to switch to formal verification. A whole industry of compliance checking could come to exist where a company wants to get its operations screened for compliance issues, so as not to suffer criminal negligence penalties.
closeparen
an hour ago
The problem here is the payments industry (continuing to issue and accept "credit card numbers") and the voters (refusing to authorize a proper national ID). An individual entity that has to conduct business under these circumstances has no real alternative.
You are not being harmed by the storage or leakage of a few bytes, that's ridiculous. You are being harmed by the financial industry and government's insistence that knowledge of these bytes is sufficient to take your property or hold a debt against you.
runjake
14 hours ago
YMMV, but it took me 15 minutes start to finish to freeze my credit with the 3 bureaus using the following instructions.
https://www.nerdwallet.com/article/finance/how-to-freeze-cre...
tombrossman
3 hours ago
YMMV indeed.
Since moving overseas 15 years ago, I tried numerous times and it simply is not possible. All the forms require a U.S. mailing address to register. Same for online access to your Social Security account.
There are an estimated 10 million Americans living overseas. Taken together, we are the equivalent of the 11th largest state. All of us completely blind to what is happening with our credit record and Social Security account.
At this point I think the only way this gets fixed is massive fraud/exploitation by organized crime, so these organizations finally address the problem.
wwmiller3
8 hours ago
Unfortunately, that isn’t enough to mitigate identity theft. Someone leveraging the recent National Public Data breach opened checking and savings accounts using my identity (no credit checks are performed in doing so) and then committed wire fraud using those accounts.
saagarjha
11 hours ago
Ok, but this is something that shouldn't be my problem. And it's not just that; I have to go unfreeze it if someone needs to run a credit check.
runjake
12 minutes ago
Right, but you've got to do what's within your control, unless you're planning a Senate campaign and plan to resist significant and lucrative lobbying operations against you.
bhhaskin
17 hours ago
If your identity gets stolen, you should be able to sue all the companies that had a leak.
nimbius
17 hours ago
I think the only reason we're seeing this revelation from a federal agency after 20 years is to boost the government's case against TikTok.
bilekas
a day ago
> To me, what's missing from that set of recommendations is some method to increase the liability of companies who mishandle user data.
As nice as this is on paper, it will never happen; lobbyists exist. Not to put on a tinfoil hat, but why would any lawmaker slap the hand that feeds them?
Until there is an independent governing body which is permitted to regulate the tech industry as a whole, it won't happen. Consider the FDA: they decide which drugs and ingredients are allowed, and that's all fine. There could be a regulating body which could determine the risk to people's mental health, for example, from 'features' of tech companies. But getting that body created will require a tragedy, like the one that led to the creation of the FDA in the first place. [1]
That's just my 2 cents.
1 : https://www.fda.gov/about-fda/fda-history/milestones-us-food....
Aerroon
18 hours ago
>There could be a regulating body which could determine the risk to people's mental health for example from 'features' of tech companies etc.
I think ideas like this are why it's not going to happen.
Our understanding of mental health is garbage. Psychiatry used to be full of quackery and very well still might be. Treatment for something like depression boils down to "let's try drugs in a random order until one works". It's a field where a coin flip rivals the accuracy of studies. Therefore any regulating body for it will just be political. It will be all about the regulators "doing something" because somebody wrote enough articles (propaganda).
Problems like this are why people aren't interested in supporting such endeavors.
intended
14 hours ago
That is not the treatment for depression.
This argument reduces mental health to medication, which leaves aside everything from the history of mental health (asylums and witch burnings to today), to leaps in medicine (from lobotomies to SNRIs, bipolar meds, and more), to simply better diagnoses.
There are certainly tons of people here who have benefited from mental health professionals - overextending the flaws in psych simply to dismiss the idea of a watchdog is several unsupported arguments too far.
night862
11 hours ago
I disagree, in brief because the practical side of psychiatry is medication-dominated, mostly because medical research is difficult and expensive.
There are some non-medication treatments for some psychiatric symptoms, such as those caused by trauma (prominently, EMDR), that some hail as actual cures, and maybe even for depression (I am clearly not a doctor), but in the case of depression I think you'll find it's quite medication-heavy.
The reason for this is that psychiatrists are Medical Doctors and Psychiatry is a medical field which is of course bounded by the means of medical science. This is not to say there is some "magic" at work which science could never understand--not at all. It is merely the case that medical doctors are a research paper oriented bunch, and most of the medical research which makes it into practice is either relating to anatomy or pharmaceutical interventions.
Most of the treatments we have are pharmaceutical medications because most of our research dollars have gone into pharmaceutical research.
I decided to edit this comment to add: in my personal opinion, it is probable that psychiatrists et al., writ large as it were, have already figured out how to cure depression. Only, we cannot really manage to employ it because it isn't a pill, therapy, device, or surgery.
throwaway92024
2 minutes ago
Ironically, the proposed biological mechanism behind EMDR is totally incorrect, and everyone knows it, outside of some diehard polyvagal theory (also totally bunk) adherents. But the treatments do work for a lot of people, probably because it's just exposure therapy by another name.
pwillia7
6 hours ago
Psychiatry is useful in the way statistics is useful for math models we don't fully understand. Statistics lets us get at answers with enough data even though we don't really understand the underlying model at play.
There's a whole host of 'sciences' that are kind of 2nd tier like this, psychiatry being one of them. Once we understand enough neuroscience, it seems likely to me that psychiatry will get consumed by neuroscience, which will splinter into categories more useful for day-to-day life as it grows (like a psychiatrist).
Super book on the subject and also talks about the rising bar for individual culpability as we understand more about the brain: https://www.amazon.com/Incognito-Secret-Lives-David-Eagleman...
pwillia7
6 hours ago
Civil disobedience is the only way stuff like this happens in America. You're right about the incentives for those in power, but how do you think we got emancipation, women's suffrage, organized labor rights, prohibition, and the end of prohibition?
samfundev
2 hours ago
While I also worry about lobbying, we'll have to lobby harder.
layer8
a day ago
I’m completely sympathetic to making companies more liable for data security. However, until data breaches regularly lead to severe outcomes for subjects whose personal data was leaked, and those outcomes can be causally linked to the breaches in an indisputable manner, it seems unlikely for such legislation to be passed.
wepple
21 hours ago
I forgot where I saw this, but the US govt recently announced that they see mass PII theft as a legitimate national security issue.
It’s not just that you or I will be inconvenienced with a bit more fraud or email spam, but rather that large nation state adversaries having huge volumes of data on the whole population can be a significant strategic advantage
And so far we typically see email+password+ssn be the worst data leaked; I expect attackers will put in more effort to get better data where possible. Images, messages, gps locations, etc
kragen
19 hours ago
yes, privacy is not an individual problem; it's a civil defense problem, and not just when your opponent is a nation-state. we already saw this in 02015 during the daesh capture of mosul; here's the entry from my bookmarks file:
https://www.facebook.com/dwight.crow/media_set?set=a.1010475... “#Weaponry and morale determine outcomes. The 2nd largest city of Iraq (Mosul) fell when 1k ISIS fighters attacked “60k” Iraqi army. 40k soldiers were artifacts of embezzlement, and of 20k real only 1.5k fought - these mostly the AK47 armed local police. An AK47 loses to a 12.7mm machine gun and armored suicide vehicle bombs. Finally, the attack was personal - soldiers received calls mid-fight threatening relatives by name and address. One army captain did not leave quickly enough and had two teenage sons executed.” #violence #Iraq #daesh
of course the americans used this kind of personalized approach extensively in afghanistan, and the israelis are using it today in lebanon and gaza, and while it hasn't been as successful as they hoped in gaza, hamas doesn't exactly seem to be winning either. it's an asymmetric weapon which will cripple "developed" countries with their extensive databases of personal information
why would a politician go to war in the first place if the adversary has the photos and imeis of their spouse, siblings, and children, so they have a good chance of knowing where they are at all times, and the politician can't hope to protect them all from targeted assassination?
the policy changes needed to defend against this kind of attack are far too extreme to be politically viable. they need to be effective at preventing the mere existence of databases like facebook's social graph and 'the work number', even in the hands of the government. many more digital pearl harbors like the one we saw this week in lebanon will therefore ensue; countries with facebook, credit bureaus, and national identity cards are inevitably defenseless
imposing liability on companies whose data is stolen is a completely ineffective measure. first, there's no point in punishing people for things they can't prevent; databases are going to get stolen if they're in a computer. second, the damage done even at a personal level can vastly exceed the recoverable assets of the company that accumulated the database. third, if a company's database leaking got your government overthrown by the zetas or daesh, what court are you going to sue the company in? one operated by the new government?
treypitt
18 hours ago
Are you saying you think more critical government databases than OPM or security clearance rosters are inevitably going to be breached? I'd like to think the government or corporation can effectively protect some databases at least...
kragen
17 hours ago
those are already pretty bad, but i think the really dangerous ones are things like verizon's billing records and customer location history, credit card transaction histories, license plate registrations, credit bureau histories, passport biometrics, enough voice recordings from each person for a deepfake, public twitter postings, etc.
consider https://en.wikipedia.org/wiki/1943_bombing_of_the_Amsterdam_...:
> The 1943 bombing of the Amsterdam civil registry office was an attempt by members of the Dutch resistance to destroy the Amsterdam civil registry (bevolkingsregister), in order to prevent the German occupiers from identifying Jews and others marked for persecution, arrest or forced labour. The March 1943 assault was only partially successful, and led to the execution of 12 participants. Nevertheless, the action likely saved many Jews from arrest and deportation to Nazi extermination camps.
to avoid partisan debate, imagine a neo-nazi group takes over the us, which presumably we can all agree would be very bad. after they took over, how hard would it be for them to find all the jews? not just make a list of them, but physically find them? (much easier than it was in 01943, i'm sure we can agree.) how hard would it be for them to find all the outspoken anti-fascists? where could those anti-fascists hide?
now, step it up a notch. how hard would it be for them to find all the jews before they take over? it wouldn't be that hard if the databases leak. and if you feel safe because you're not jewish, rest assured that neo-nazis aren't the only groups who are willing to use violence for political ends. someone out there wants you dead simply because of the demographic groups you belong to. the reason you haven't been seeing widespread political violence previously is that it hasn't been a winning strategy
the situation is changing very fast
intended
14 hours ago
Hey, on a long enough timeline the answer will tend towards yes.
Do note, that this isn’t just an Americas problem.
Your data is probably on DBs in other nations.
Plus - the playbook is to target weaker nations and then use them for staging grounds to target stronger nations.
dantheman
19 hours ago
Perhaps you're not aware of https://en.wikipedia.org/wiki/Office_of_Personnel_Management...
wepple
18 hours ago
Very aware of that. That to me seemed like a targeted attack by a tracked APT group. What I’m referring to above is that the more vanilla attacks (ex: popular online mattress store gets popped) actually have national security implications, despite seeming like just an inconvenience
grugq
12 hours ago
> Even minutiae should have a place in our collection, for things of a seemingly trifling nature, when enjoined with others of a more serious cast, may lead to valuable conclusion.
— George Washington.
EasyMark
19 hours ago
They’d need a lot less security if they stopped spying on us and saving all of our most critical ID data, period.
deegles
17 hours ago
Nearly everyone's data has been leaked already. Any strong protections would only protect people who haven't been born yet imo.
Onavo
21 hours ago
Then instead of regulating the companies, make SSNs easily revocable and unique per service. I don't understand why Americans are so opposed to a national ID despite the fact that every KYC service uses SSNs and driver's licenses.
candiddevmike
21 hours ago
Because they're the mark of the beast or a step towards fascism or something.
I don't think it would take much to convert Real IDs into a national ID; they are as close as they can get without "freaking people out".
Nevermark
18 hours ago
Emphasizing that the number can be changed would really help there.
People could even generate their own number (a private key) which they never give out, which appears differently to each account manager verifying it, and which they could still replace.
When you choose your own number, it's only the Mark of the Beast if you are the Beast! * **
* 666, 13, 69 and 5318008 expressly prohibited.
** Our offices only provide temporary tattoos.
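The self-generated, per-service number described above can be sketched in a few lines: derive a distinct pseudonym for each service from a secret only the user holds, so no two services see the same identifier and rotating the secret replaces all of them at once. A minimal sketch, assuming an HMAC-based derivation (the function name and service names are made up for illustration; this is not any real national-ID scheme):

```python
import hashlib
import hmac
import secrets

def service_identifier(user_secret: bytes, service_name: str) -> str:
    """Derive a stable per-service pseudonym from a secret only the user holds.

    Each service sees a different identifier, so a breach at one service
    leaks nothing usable at another, and rotating user_secret revokes
    every derived identifier at once.
    """
    return hmac.new(user_secret, service_name.encode(), hashlib.sha256).hexdigest()

# The user generates and keeps the secret; only derived pseudonyms go out.
secret = secrets.token_bytes(32)
bank_id = service_identifier(secret, "examplebank.com")
employer_id = service_identifier(secret, "example-employer.com")
```

Because the derivation is deterministic, the same service always sees the same pseudonym, but the pseudonyms are uncorrelatable across services without the secret.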
mapt
21 hours ago
The expansion of KYC and the hegemonic dominance of our global financial intelligence network is a recent infringement on our privacy that would not necessarily pass popular muster if it became well-known.
Most of our population is still living in a headspace where transactions are effectively private and untraceable, from the cash era, and has not considered all the ways that the end of this system makes them potential prey.
The fact is that the market is demanding a way to identify you both publicly and privately, and it will use whatever it needs to, including something fragile like a telephone number 2fa where you have no recourse when something goes wrong. It's already got a covert file on you a mile long, far more detailed than anything the intelligence agencies have bothered putting together. The political manifestation of anti-ID libertarians is wildly off base.
monksy
15 hours ago
The concern about organizations' and the government's felt need to track you is a very valid one. Why does the government need to make sure your "hand job from a friend" Venmo payment to your friend is "legally legit"? (You can get transactions flagged for this, and the moderator will shame you.)
Are you correct in what's going on? Yes. Are we placed in this with no option to resist? For the most part yes.
mapt
21 hours ago
"What fraction of the FBI and CIA do the Communists have blackmail material on?"
arminiusreturns
a day ago
I agree. Let me tell you about what just happened to me. After a very public burnout and spiral, a friend rescued me and I took a part-time gig helping a credit card processing company. About 2 months ago, the owner needed something done while I was out, and got their Uber driver to send an email. They emailed the entire customer database, including bank accounts, socials, names, addresses, and finance data, to a single customer. When I found out (it was kept hidden from me for 11 days), I said, "This is a big deal; here are all the remediations, and besides PCI, we have 45 days by law to notify affected customers." The owner said "we aren't going to do that", and thus I had to turn in my resignation and am now unemployed again.
So I, for trying to do the right thing, am now scrambling for work, while the offender pretends nothing happened after potentially compromising the entire customer base, and will likely suffer no penalty unless I report it to PCI, which I would get no reward for.
Why is it everywhere I go management is always doing shady stuff. I just want to do linuxy/datacentery things for someone who's honest... /cry
My mega side project isn't close enough to do a premature launch yet. Despite my entire plan being to forgo VC/investors, I'm now considering compromising.
aftbit
a day ago
>Why is it everywhere I go management is always doing shady stuff.
Well here's a cynical take on this - management is playing the business game at a higher level than you. "Shady stuff" is the natural outcome of profit motivation. Our society is fundamentally corrupt. It is designed to use the power of coercive force to protect the rights and possessions of the rich against the threat of violence by the poor. The only way to engage with it AND keep your hands clean is to be in a position that lets you blind yourself to the problem. At the end of the day, we are all still complicit in enabling slave labor and are beneficiaries of policies that harm the poor and our environment in order to enrich our lives.
>unless I report it to PCI, which I would get no reward for.
You may be looking at that backwards. Unless you report it to PCI, you are still complicit in the mishandling of the breach, even though you resigned. You might have been better off reporting it over the owner's objections, then claiming whistleblower protections if they tried to terminate you.
This is not legal advice, I am not a lawyer, I am not your lawyer, etc.
arminiusreturns
a day ago
I did verify with an attorney that since I wasn't involved and made sure the owner knew what was what, that I had no legal obligations to disclose.
HansardExpert
9 hours ago
What about your moral obligation?
positus
a day ago
The problem isn't society or profit motivation. It's people. Humanity itself is corrupt. There aren't "good people" and "bad people". There's only "bad people." We're all bad people, just some of us are more comfortable with our corruption being visible to others to a higher degree.
ragnese
a day ago
> We're all bad people, just some of us are more comfortable with our corruption being visible to others to a higher degree.
If the GP's story is true (and I have no reason to suspect otherwise), then there are clearly differences in the degree of "badness" between people. GP chose to resign from his job, while his manager chose to be negligent and dishonest.
So, even if we're all bad people, there are less bad and more bad people, so we might as well call the less bad end of the spectrum "good". Thus, there are good and bad people.
positus
15 hours ago
I understand your perspective, but I maintain that "good" (morally pure) isn't a category any of us belong to. We're all lying, hateful people to one extent or another, and lying hateful people aren't "good", even if we haven't lied or hated as much as other lying, hateful people. "Less evil" isn't synonymous with "good".
The argument that profit motivation is the origin of shady business practices ignores the existence of those businesses which pursue profit in an ethical manner. The company I work for, for instance, is highly motivated to produce a profit, but the way we go about obtaining that profit is by providing our customers with products that have real value, at fair (and competitive) prices, and by providing consistently excellent customer support. Our customers are *very* satisfied with our products and services, and they show their satisfaction with extreme brand loyalty. The profit we make year over year allows us to increase the quality of life for our employees, and keeps our employees highly motivated towards serving our customers. We pursue the good of our customers alongside our own, and we avoid shady business practices like the plague.
idle_zealot
6 hours ago
What is this even supposed to mean? Profit motivation is a concept invented by humans for humans to apply. If it leads to unexpected or undesirable outcomes then it's a bad idea. A system that requires all participants be paragons of some definition of virtue to produce good results is fundamentally unsuited for human beings.
ValentinA23
a day ago
The DOJ has just launched a corporate whistleblower program; you should look into it, as it may cover your case:
https://www.justice.gov/criminal/criminal-division-corporate...
>As described in more detail in the program guidance, the information must relate to one of the following areas: (1) certain crimes involving financial institutions, from traditional banks to cryptocurrency businesses; (2) foreign corruption involving misconduct by companies; (3) domestic corruption involving misconduct by companies; or (4) health care fraud schemes involving private insurance plans.
>If the information a whistleblower submits results in a successful prosecution that includes criminal or civil forfeiture, the whistleblower may be eligible to receive an award of a percentage of the forfeited assets, depending on considerations set out in the program guidance. If you have information to report, please fill out the intake form below and submit your information via CorporateWhistleblower@usdoj.gov. Submissions are confidential to the fullest extent of the law.
TinyRick
a day ago
Why would you resign? You could have reported it yourself and then you would have whistleblower protections - if the company retaliated against you (e.g. fired you), you then would have had a strong lawsuit.
arminiusreturns
a day ago
Because I don't want to be associated with companies that break the law and violate regulations knowingly. I've long had a reputation of integrity, and it's one of the few things I have left having almost nothing else.
TinyRick
a day ago
So you would rather be known as someone who had an opportunity to report a violation and chose not to? From my perspective it seems like you decided against acting with integrity in this situation: the moral thing would have been to report the violation, but you chose to look the other way and resign.
1659447091
18 hours ago
> it seems like you decided against acting with integrity in this situation ... you chose to look the other way and resign.
I agree with this statement.
This isn't a judgement, we all have to make choices; the "right" choice (the one that aligns with integrity) is usually the one that will be the least self-serving and even temporarily harmful. They did what was right for them, that's okay, but it was not the choice of integrity.
Dylan16807
16 hours ago
How is quitting right for them? They chose a path that's bad for the users and bad for them.
1659447091
15 hours ago
Because that is the choice they made for themselves.
How it plays out afterwards is another matter entirely. But the choice was what they seemed to think was right, for them, at the time. Thus it was the right choice for them. It doesn't mean it was the right choice in terms of integrity, or the right choice for me, or you, or anyone whose data got caught up in it. Nor was it the right choice in terms of receiving a paycheck the next week.
But the way it was explained, it doesn't seem like they went out of their way to pick a "wrong" choice, specifically. They picked what they felt was the right one, for them, at that time. There were less ethical options to choose as well, and those were not picked either.
Dylan16807
15 hours ago
Someone choosing an action does not at all mean it's the right choice for them.
1659447091
15 hours ago
I believe we are talking two separate things.
You appear to be talking about the external consequences of choices, while I am talking about them making a choice based on what they believed was the inner rightness of that choice. They did not want to be associated with a company like that, so they made the choice not to be -- because it aligned with their inner knowing of not wanting to be a part of that company. The rightness or wrongness in terms of external consequences is not what makes the choice right or wrong -- for them.
Dylan16807
14 hours ago
But they left the vast majority of the morality on the table. They even talked to a lawyer to avoid reporting. So in the sense of making the choice that aligns with inner rightness and makes them moral, they still made a bad choice.
1659447091
13 hours ago
> making the choice that aligns with inner rightness
Again, I am talking about -- them -- not anyone one else or what anyone else thinks of it outside of them. I am not talking about "inner rightness" in general, I am talking "what they believed was the inner rightness of their choice" -- Their inner rightness. You seem to be talking about what -- you and/or others -- may believe from an outside perspective. My outside perspective is they made the choice that did not align with integrity. But that does not mean that was not the right choice for them.
And again, they made the right choice, for them -- at that time. How that plays out afterwards is neither here nor there, and your labeling it a "bad" choice for them is akin to saying that they have no real agency over their choices, and that we outside of them have the final say in what is good or bad for that person.
Dylan16807
8 hours ago
Again, I am talking about things internal to that specific person just as much as you are. Not external anything.
You are trying to focus on what they believed in that moment, but I see no reason to use that in an analysis of whether their actions fit their own morals. Sometimes people make mistakes even by their own rules. If we only care about what someone thought right in the heat of the moment, that category of mistake would be impossible, and it's not impossible. Saying that mistakes are possible is not overriding agency.
The core of it is in this line "the choice was what they seemed to think was right, for them, at the time. Thus it was the right choice for them". I don't agree with that logic at all. Humans are not good enough at following their own motivations and principles. They are impulsive and bad at analysis. You can't assume that their choices will always be consistent with their personal parameters of right and wrong.
Also, saying I think someone made a mistake is not denying agency. Don't be so melodramatic. Nowhere am I claiming to have the final say. I merely have the right to an opinion.
1659447091
4 hours ago
I was never talking about if they made a mistake or not. That is after the fact and outside the scope of what I have been saying. I know it matters, but that is not within the scope of my first comment that started this.
I took the little information they gave and from that the only true logical conclusion was they made the right choice for them at that moment. Full Stop.
You’re the one bringing the extra opinions into the matter and reading into a simple thing far too much. Most of the above I agree with you on outside of this particular thread. It has nothing to do with the very narrow scope of my original comment and attempted clarification.
Neither of us can know 100% what was right or wrong for them in that moment, but based on the information of A. no longer feeling right about being associated with a place for reasons that they deemed important enough to come to this conclusion — and B. aligning actions with that inner knowledge; makes it the right action (choice) for that person. If they changed their mind later, it does not change the immutable facts of that moment. It simply provides a new set of choices and options that is outside the scope of my original comment.
Dylan16807
3 hours ago
> I was never talking about if they made a mistake or not. That is after the fact and outside the scope of what I have been saying. I know it matters, but that is not within the scope of my first comment that started this.
When I say mistake here, I specifically mean "mistake as far as their goal of making the right choice". And I mean that in the moment, using knowledge they have at that time, just like you're defining "right choice". Nothing after the fact nor outside the scope.
> I took the little information they gave and from that the only true logical conclusion was they made the right choice for them at that moment. Full Stop.
I don't see how they gave enough information to be sure, but more importantly you seemed to make a generic statement that anyone making a choice like that would be making the right choice, and that's what I really object to.
> You’re the one bringing the extra opinions
I am not! Please stop misreading me! Why won't you listen to what I'm saying about my own argument?
> Neither of us can know 100% what was right or wrong for them in that moment,
Please explain how "neither of us can know 100%" can be true at the same time as "only true logical conclusion was they made the right choice for them at that moment. Full Stop."
> A, B
Remember that not reporting the company was also part of the choice they made. The basic description of the choice was to report, quit, both, or neither, and they chose to quit.
> If they changed their mind later, it does not change the immutable facts of that moment. It simply provides a new set of choices and options that is outside the scope of my original comment.
I'm not talking about whether someone might change their mind later with new information, per se. I'm making the objectively true claim that people don't always think things through, meaning their choice might fail to represent the knowledge and priorities they had at the time.
qup
a day ago
I wonder if I was part of the database that got emailed.
arminiusreturns
20 hours ago
Very unlikely; this is a very small operation with a tiny customer base.
mikeodds
a day ago
As in.. his actual Uber driver? He just handed his laptop over?
arminiusreturns
a day ago
Yes. The owner is old, and going blind, but refuses to sell or hand over day to day ops to someone else, and thus must ask for help on almost everything. I even pulled on my network to find a big processor with a good reputation to buy the company, but after constant delays and excuses for not engaging with them, I realized to the owner the business is both their "baby" and their social life, neither of which they want to lose.
trinsic2
18 hours ago
Sounds like a bunch of crap the industry is already trying to sell the public, and no, it's not working, and yes, we can do without it.
alsetmusic
a day ago
Regulation is key, but I don’t see it as likely when our society is poisoned by culture war bs. Once we put that behind us (currently unlikely), we can pass sane laws reining in huge corporations.
2OEH8eoCRo0
a day ago
I get a feeling that liability is the missing piece in a lot of these issues. Section 230? Liability. Protection of personal data? Liability. Minors viewing porn? Liability.
Lack of liability is screwing up the incentive structure.
brookst
a day ago
I think I agree, but people will have very different views on where liability should fall, and on whether it should be a malicious, negligent, or no-fault model.
Section 230? Is it the platform or the originating user that's liable?
Protection of personal data? Is there a standard of care beyond which liability lapses (e.g. a nation state supply chain attack exfiltrates encrypted data and keys are broken due to novel quantum attack)?
Minors viewing porn? Is it the parents, the ISP, the distributor, or the creator that's liable?
I'm not here to argue specific answers, just saying that everyone will agree liability would fix this, and few will agree on who should be liable for what.
TheOtherHobbes
a day ago
It's not a solvable problem. Like most tech problems it's political, not technical. There is no way to balance the competing demands of privacy, security, legality, and corporate overreach.
It might be solvable with some kind of ID escrow, where an independent international agency managed ID as a not-for-profit service. Users would have a unique biometrically-tagged ID, ID confirmation would be handled by the agency, ID and user behaviour tracking would be disallowed by default and only allowed under strictly monitored conditions, and law enforcement requests would go through strict vetting.
It's not hard to see why that will never happen in today's world.
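A minimal sketch of the escrow idea described above, with all names and the HMAC scheme invented purely for illustration (a real system would involve biometric enrolment, revocation, auditing, and much more): the relying party never receives the user's identifier, only a one-time assertion it can ask the agency to verify.

```python
# Hypothetical "ID escrow" flow: the agency confirms identity and mints a
# single-use assertion; the relying party never sees the user's ID itself.
import hashlib
import hmac
import secrets

AGENCY_KEY = secrets.token_bytes(32)  # held only by the escrow agency


def issue_assertion(user_id: str, relying_party: str) -> tuple[str, str]:
    """Agency-side: after verifying the user (biometrically, in this
    hypothetical), mint a nonce plus an HMAC bound to the relying party.
    Note that user_id never leaves the agency."""
    nonce = secrets.token_hex(16)
    tag = hmac.new(AGENCY_KEY, f"{nonce}:{relying_party}".encode(),
                   hashlib.sha256).hexdigest()
    return nonce, tag


def verify_assertion(nonce: str, tag: str, relying_party: str) -> bool:
    """Agency-side check, invoked when a relying party presents an assertion.
    A tag issued for one relying party fails verification for any other."""
    expected = hmac.new(AGENCY_KEY, f"{nonce}:{relying_party}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)


nonce, tag = issue_assertion("user-1234", "example-merchant")
assert verify_assertion(nonce, tag, "example-merchant")
assert not verify_assertion(nonce, tag, "other-merchant")
```

The point of the design is that there is no shared secret for the merchant to leak: a stolen assertion is useless to anyone but the relying party it was minted for.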
malfist
a day ago
> It's not a solvable problem
Lawnmower manufacturers said the same thing about making safe lawnmowers, until government regulations forced them to.
brookst
4 hours ago
Well, perpetual motion is also considered unsolvable. Perhaps the right regulation would make it happen?
Or... maybe that line of reasoning isn't super strong.
malfist
3 hours ago
There's a big difference between breaking the physical laws of the universe and Big Tech not wanting to spend money on moderators.
ToucanLoucan
21 hours ago
https://i.imgur.com/mXU28ta.jpeg
Specifically, 1970.
Aerroon
18 hours ago
Well, something to consider is that part of why everything is so much more expensive these days is that a lot of the solutions to those problems add costs. That cost needs to be absorbed by the price.
One of the reasons it's so expensive to build a house is safety regulations. They exist for a reason, but they nevertheless add a substantial cost to building a house. If you had mandated such a cost to people living in 1870 then a lot fewer people could've afforded a house.
malfist
3 hours ago
Building codes are written in blood.
ToucanLoucan
3 hours ago
In fact you can say that about most regulations of any sort. This seems to hit people's brains the same way vaccination hit anti-vaxxers: we effectively eradicated Measles via vaccination (though it's now making a comeback thanks to anti-vaxxers), and people have forgotten how horrible it is. In the same way, people forget that, sure, houses used to be cheaper to build. And they also burned down a hell of a lot more, or were constructed poorly and not to code, or were built too close together, etc. etc. etc.
Just about every regulation that exists for every product, thing, or way of doing something was written not merely after a death; that's usually not enough. It usually takes a substantial number of deaths that can be directly linked to the thing the regulation covers, and only then does it become law.
StanislavPetrov
a day ago
>Protection of personal data? Is there a standard of care beyond which liability lapses (e.g. a nation state supply chain attack exfiltrates encrypted data and keys are broken due to novel quantum attack)?
There absolutely should be, especially for personal data collected and stored without the express written consent of those being surveilled. They should have to get people to sign off on the risks of having their personal data collected and stored, be legally prevented from collecting and storing the personal data of people who haven't consented and/or be liable for any leaking or unlawful sharing/selling of this data.
zeroonetwothree
a day ago
If you aren’t directly harmed yet, what liability would they have? I imagine if your identity is stolen and it can be tied to a breach, then they would already be liable.
kibwen
a day ago
The fact that my data can be stolen in the first place is already outrageous, because I neither consented to allowing these companies to have my data, nor benefit from them having my data.
It's like if you go to an AirBNB and the owner sneaks in at night and takes photos of you sleeping naked and keeps those photos in a folder on his bookshelf. Would you be okay with that? If you're not directly harmed, what liability would they have?
Personal data should be radioactive. Any company retaining it better have a damn good reason, and if not then their company should be burned to the ground and the owners clapped in irons. And before anyone asks, "personalized advertisements" is not a good reason.
ryandrake
a day ago
That's the big problem with relying on tort law to curb this kind of bad corporate behavior: The plaintiff has to show actual injury or harm. This kind of bad behavior should be criminal, and the state should be going after companies.
JumpCrisscross
a day ago
> before anyone asks, "personalized advertisements" is not a good reason
The good reason is growth. Our AI sector is based on, in large part, the fruits of these data. Maybe it's all baloney, I don't know. But those are jobs, investment and taxes that e.g. Europe has skipped out on that America and China are capitalising on.
My point, by the way, isn't pro surveillance. I enjoy my privacy. But blanket labelling personal data as radioactive doesn't seem to have any benefit to it outside emotional comfort. Instead, we need to do a better job of specifying which data are harmful to accumulate and why. SSNs are obviously not an issue. Data that can be used to target e.g. election misinformation are.
thfuran
20 hours ago
So you're saying it's all vastly valuable and that's why it is right that it is taken without consent or compensation?
JumpCrisscross
19 hours ago
> it's all vastly valuable and that's why it is right that it is taken without consent or compensation?
No, I'm saying it's a commons with a benefit to utilisation. A lot of discussions around data involve zealots on both sides. (One claiming it's their god-given right to harvest everyone's personal information. The other acting like it's the crime of the century for their email address to be leaked.)
rockskon
11 hours ago
See, your problem is you think you're talking to politicians, Facebook-era journalists, and disinfo activists.
Most people here have thought about the topic of privacy in the modern era far more than some 70-year-old politician has.
pc86
a day ago
I mean, it's pretty clear that you are directly harmed if someone takes naked photos of you without your knowledge or consent and then keeps them. It's not a good analogy, so if you want to convince people like the GP of the points you're making, you need to make a better case, because that is not how the law is currently structured. "I don't like ads" is not a good reason, and comments like this that are seething with rage and hyperbole don't convince anyone of anything.
drawkward
a day ago
What is the harm? It is not obvious to me, if the victim is unaware...unless you are alleging simply that there is some ill-defined right to privacy. But if that is so, why does it apply to my crotch and not my personal data?
simoncion
a day ago
These are exactly my questions. If I never, ever know about those pictures and never, ever have my life affected by those pictures, what is the actual harm to me?
If the answer to them ends up being "Well, it's illegal to take non-consensual nudie pictures.", then my follow-up question is "So, why isn't the failure to protect my personal information also illegal?".
To be perfectly clear, I do believe that the scenario kibwen describes SHOULD be illegal. But I ALSO believe that it should be SUPER illegal for a company to fail to secure data that it has on me. Regardless of whether they are retaining that information because there is literally no way they could provide me with the service I'm paying them for without it, or if they're only retaining that information in the hopes of making a few pennies off of it by selling it to data brokers or whoever, they should have a VERY SERIOUS legal obligation to keep that information safe and secure.
lcnPylGDnU4H9OF
a day ago
> to fail to secure data that it has on me
Just want to point out that the company is usually also doing what it can to get other information about you without your consent based on other information it has about you. It's a lot closer to the "taking non-consensual nudie pictures" than "fail to secure data" makes it sound.
JumpCrisscross
a day ago
> it's pretty clear that you are directly harmed if someone takes naked photos of you without your knowledge or consent and then keeps them
Sure. In those cases, there are damages and that creates liability. I'm not sure what damages I've ever faced from any leak of e.g. my SSN.
pc86
6 hours ago
The real kicker is trying to prove which leak your SSN came from. If your SSN gets leaked by 3 different companies, and 6 months later someone uses your identity to commit some crime, you can't have each company share 1/3 of the blame.
pixl97
a day ago
I mean most people won't until the day they find out there's a house in Idaho under their name (and yes, I've seen just this happen).
The problem here is because of all these little data leaks you as an individual now bear a cost ensuring that others out there are not using your identity and if it happens you have to clean up the mess by pleading it wasn't you in the first place.
ranger_danger
a day ago
>I neither consented to allowing these companies to have my data, nor benefit from them having my data.
I think both of those are debatable.
lesuorac
a day ago
I don't think that's a proper parallel.
I think a better example would be You (AirBnB Host) rent a house to Person and Person loses the house key. Later on (perhaps many years later), You are robbed. Does Person have liability for the robbery?
Of course it also gets really muddy, because you'll have been renting the house out for those years, and during that time many people will have lost keys. So does liability get divided? Is it the most recent lost key?
Personally, I think it should just be some statutory damages of probably a very small amount per piece of data.
polygamous_bat
a day ago
> I think a better example would be You (AirBnB Host) rent a house to Person and Person loses the house key.
This is not a direct analogue, a closer analogy would be when the guest creates a copy of the key (why?) without my direct consent (signing a 2138 page "user agreement" doesn't count) and at some later point when I am no longer renting to them, loses the key.
lesuorac
a day ago
I'm still much more interested in the answer to who is liable for the robbery.
Just the Robber? Or are any of the key-copiers (or key-losers, whatever) liable as well?
Dylan16807
16 hours ago
I don't really care about the answer to that specific question, where there's only one household.
What I will say is the guy that has copies of 20000 people's keys should get in trouble if he loses his hoard.
pixl97
a day ago
The particular problem comes in because the amount of data lost tends to be massive when these breaches occur.
It's kind of like the idea of robbing a minute from someone's life. It's not every much to an individual, but across large populations it's a massive theft.
lesuorac
a day ago
Sure and if you pay a statutory fine times 10 million then it becomes a big deal and therefore companies would be incentivized to protect it better the larger they get.
Right now they probably get some near free rate to offer you credit monitoring and dgaf.
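As a purely illustrative back-of-the-envelope sketch (the per-record amount and breach size are made up), even a small per-record statutory fine becomes material at breach scale:

```python
# Hypothetical illustration: a flat statutory fine per leaked record.
# All figures are invented; no real statute is being modeled.

def statutory_damages(records_leaked: int, fine_per_record: float) -> float:
    """Total statutory damages for a breach: a flat amount per record."""
    return records_leaked * fine_per_record

# A $5-per-record fine on a 10-million-record breach:
total = statutory_damages(10_000_000, 5.00)
print(f"${total:,.0f}")  # $50,000,000
```

The per-record structure is what creates the scaling incentive the comment describes: the penalty grows linearly with the amount of data hoarded, so the cost of a breach tracks the size of the data trove rather than being a flat, budgetable fee.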
8note
a day ago
This version loses multiple parts that are important:
1. I have no control over what was stored. 2. I have no control over where the storage is.
The liability in this case falls on the homeowner/host, since you had (and should have used) full ability to change out the locks.
To make it more similar, I think you'd need one of the guests to have taken some amount of art off the wall, and brought it to a storage unit, and then the art later was stolen from the storage unit, and you don't have access to the storage unit.
It's not as good as the naked pictures example because what's been taken is copies of something sensitive, not the whole thing
mistrial9
4 hours ago
> You (AirBnB Host) rent a house to Person
this is an outrageously incorrect analogy.. you ASSUME property ownership in the first statement. Where are personal legal records analogous to owned property, and according to whom?
drawkward
a day ago
Go ahead, post your phone number here. It's not directly harmful.
blondelegs
20 hours ago
1-800-call-FEDS
drawkward
18 hours ago
Bahahaha :)
halJordan
a day ago
This is the traditional way of thinking, and a good question, but it is not the only way.
An able-bodied person can fully make complaints against any business that fails its Americans with Disabilities Act obligations. In fact, these complaints by able-bodied well-doers are the de facto enforcement mechanism, even though these people can never suffer damage from that failure.
The answer is simply to legislate the liability into existence.
idle_zealot
a day ago
That's the whole problem with "liability", isn't it? If the harms you do are diffuse enough then nobody can sue you!
squeaky-clean
a day ago
The same way you can get ticketed for speeding in your car despite not actually hitting anyone or anything.
drawkward
a day ago
Surveillance apologist.
bunderbunder
a day ago
This is exactly why thinking of it in terms of individual cases of actual harm, as Americans have been conditioned to do by default, is precisely the wrong way to think about it. We're all familiar with the phrase "an ounce of prevention is worth a pound of cure", right?
It's better to think of it in terms of prevention. This fits into a category of things where we know they create a disproportionate risk of harm, and we therefore decide that the behavior just shouldn't be allowed in the first place. This is why there are building codes that don't allow certain ways of doing the plumbing that tend to lead to increased risk of raw sewage flowing into living spaces. The point isn't to punish people for getting poop water all over someone's nice clean carpet; the point is to keep the poop water from soaking the carpet in the first place.
supertrope
a day ago
Safety rules are written in blood. After a disaster there’s a push to regulate. After enough years we only see the costs of the rules and not the prevented injuries and damage. The safety regulations are then considered annoying and burdensome to businesses. Rules are repealed or left unenforced. There is another disaster…
bunderbunder
a day ago
Tangentially, there was an internet kerfuffle about someone getting in trouble for having flower planters hanging out the window of their Manhattan high rise apartment a while back, and people's responses really struck me.
People from less dense areas generally saw this as draconian nanny state absurdity. People who had spent time living in dense urban areas with high rise residential buildings, on the other hand, were more likely to think, "Yeah, duh, this rule makes perfect sense."
Similarly, I've noticed that my fellow data scientists are MUCH less likely to have social media accounts. I'd like to think it's because we are more likely to understand the kinds of harm that are possible with this kind of data collection, and just how irreparable that harm can be.
Perhaps Americans are less likely to support Europe-style privacy rules than Europeans are because Americans are less likely than Europeans to know people who saw first-hand some of what was happening in Europe in the 20th century.