billyhoffman
15 hours ago
Common Crawl is shown in their screenshot of "Providers" alongside OpenAI and Anthropic. The challenge is that Common Crawl is used for a lot of things that are not AI training. For example, it's a major source of content for the Wayback Machine.
In fact, that's the entire point of the Common Crawl project. Instead of dozens of companies writing and running their own (poorly designed) crawlers and hitting everyone's site, Common Crawl crawls once and exposes the data in industry-standard formats like WARC for other consumers. Their crawler is quite well behaved (exponential backoff, obeys Crawl-delay, uses sitemaps to know when to revisit, follows robots.txt, etc.).
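Those politeness mechanisms are machine-readable, so honoring them takes very little code. A minimal sketch using Python's standard `urllib.robotparser` (CCBot is Common Crawl's real user-agent token, but the robots.txt rules here are made up for illustration):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt; CCBot is Common Crawl's crawler UA token.
robots_txt = """\
User-agent: CCBot
Crawl-delay: 2
Disallow: /private/

User-agent: *
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A well-behaved crawler consults these before every fetch.
print(rp.can_fetch("CCBot", "https://example.com/articles/1"))  # True
print(rp.can_fetch("CCBot", "https://example.com/private/x"))   # False
print(rp.crawl_delay("CCBot"))  # 2 (seconds to wait between requests)
```

The point stands on its own: a crawler that checks `can_fetch` and sleeps for `crawl_delay` between requests is trivially cheap to write, which is what makes poorly behaved crawlers so inexcusable.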
There are significant knock-on effects if Cloudflare starts (literally) gatekeeping content. This feels like a step down the path to a world where the majority of websites use sophisticated security products that gatekeep access based on who pays and who doesn't, and that applies whether they are bots or people.
Aachen
13 hours ago
> gatekeep access based on who pays and who doesn't, and that applies whether they are bots or people.
I'm already constantly being classified as a bot. Just today:
To check if something is included in a subscription that we already pay for, I opened some product page on the Microsoft website this morning. Full-page error: "We are currently experiencing high demand. Please try again later." It's static content but it's not available to me. Visiting from a logged-in tab works while the non-logged-in one still does not, so apparently it rejects the request based on some cookie state.
Just now I was trying to book a hotel room for a conference in Grenoble. Looking in the browser dev tools, it seems that Visa is trying to run some bot detection (the payment provider redirects to their site for the verification code, but Visa automatically redirects me back with an error status), which blocks the payment. There are no other payment methods. Using Google Chrome works, but Firefox with uBlock Origin (a very niche setup, I'll admit) locks you out of this part of the internet.
Visiting various US sites will result in a Cloudflare captcha to "prove I'm human". For the time being, it's less of a time waste to go back and click a different search result, but this never used to happen and now it's a daily occurrence...
theyeenzbeanz
13 hours ago
Lately I’ve been noticing captchas getting more difficult day by day on Firefox. Checking the box used to go through without issue, but now it's been starting to pop up challenges with the boxes that fade after clicking. Just like your experience, Chrome has no hiccups on the same machine.
Aachen
13 hours ago
Those "keep clicking until we stop fading in more results" challenges mean they're fairly confident you're a bot and this is the highest difficulty level to prove your lack of guilt. I get these only when using a browser that isn't already full of advertising cookies (edit: which, to be clear, I hope is still considered an acceptable state to have your browser in)
diggan
12 hours ago
> Those "keep clicking until we stop fading in more results" challenges mean they're fairly confident you're a bot
Those ones are the fucking worst. I've noticed that if I try to succeed in these captchas too quickly, it'll just say "Sorry, try again" even when every click was correct, so instead, I've started going in slow motion and faking "misclicking" which makes it much more likely to accept me as human.
I cannot stand the idea that I have to pretend to be slower than I am in order for a computer to not think I'm a computer. Thanks Cloudflare and Google.
klyrs
11 hours ago
I always spoil as many of these as possible. Sometimes it takes me a while to prove that I'm human, but I'm dead-set on convincing it that I'm a stupid human. Of course, I fantasize that some day a robo-car will crash because I taught it that there's really no difference between a motorcycle and a flight of stairs.
bryanrasmussen
3 minutes ago
> but I'm dead-set on convincing it that I'm a stupid human
this guy is really dumb BUT he has a very high quality computer THUS he is in the managerial class Final -> Ramp up the Ads!
peddling-brink
2 hours ago
Excellent short story that's somewhat related, at least.
ForOldHack
9 hours ago
I was waiting for the day that two SUVs would hit each other, and it happened.
Now I am waiting for two self driving cars to hit each other... they already drive like "American idiots", guess we know what the training model is.
dylan604
11 hours ago
You'll just be lower on the list the AI makes of people that would be a threat.
WaxProlix
10 hours ago
I love this idea, some sort of inverse Roko's Basilisk. Tie a bunch of low-IQ data points to the sources a super AI is likely to first use to identify threats so as to eke out a few more days of existence.
selcuka
4 hours ago
> I cannot stand the idea that I have to pretend to be slower than I am, in order for a computer to not think I'm a computer.
It is not only about detecting if you are a computer or not. They intentionally waste your time (regardless of whether you are a human or computer) to make it unfeasible to scrape millions of pages. The actual "detection" part is relatively less important.
mqus
11 hours ago
As soon as I notice that I got this slow-fade captcha, I intentionally click all the wrong fields until I get a reasonable captcha. Not sure if this makes a difference, but it kinda works.
jkestner
11 hours ago
Harrison Bergeron but for AI
LegionMammal978
12 hours ago
FWIW, it can't be cookies alone that give you an inordinate number of bot challenges. I use private tabs on Firefox (for Linux and Android) for most of my browsing, and I rarely get any challenges regardless of what I do. The only issues tend to be when I make repeated searches for things with "quotes" and whatnot on Google or on Stack Exchange sites. But for the most part, those challenges aren't particularly drawn out: I've only ever gotten the "fading" ones when I'm using Tor or a VPN.
Aachen
12 hours ago
It varies a lot based on what I'm doing. Sites that rely on ads, like English-language¹ recipes or health information, have a lot of "you're European so you're blocked altogether" or "let me check that the connection is secure, ah wait, here is a captcha for you to solve" pages. Anything that needs to do fraud detection usually hates me as well, perhaps because I have a phone number and bank account from another country than the one I live in, or perhaps because I often navigate pages differently than most people (keyboard navigation); who knows what makes these black boxes trigger. German ISPs also rotate IP addresses daily, so there is absolutely nothing tying a previous request to the current one; that may be a factor too.
All in all, I'm someone who would benefit from a society not run by algorithms, where I can just pay up front for my use (no credit mechanisms, no fraud detection, no tracking ads), at least as an available option
¹ it's the language I think in the most, and it has many more resources than the local languages I speak
grepfru_it
2 hours ago
> or a VPN
My wife does not get these captchas, yet I do, on the same network. I have more privacy-enhancing software on my devices. I think protecting your privacy and blocking unwanted ads is considered bot behavior. That should absolutely be villainized and banned as a practice.
shadowgovt
11 hours ago
It's acceptable, but suspicious: two standard deviations away from the median browser, and a lot more like the configuration of a scraper, which gets reloaded in some Docker instance frequently with a fresh empty cookie jar, because storing data costs infrastructure.
ForOldHack
9 hours ago
You mean Edge? Chrome stands at 65.2% (1 deviation), Safari at 18.57% (2 deviations), so Edge at 5.4%, Firefox, Opera, Samsung Internet, UC Browser, Android, QQ and the others are all ... deviants?
https://gs.statcounter.com/browser-market-share
I use Firefox nightly which does not even show up statistically...
shadowgovt
9 hours ago
Not sure if they're using the User-Agent. Probably not, because it's so easy to forge.
I'm thinking more of things like "what cookies does Cloudflare see as having already been set on this browser," because the average user browses with cookies and JavaScript enabled and without an ad blocker.
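Forging a User-Agent really is a one-liner, which is why it carries so little signal. A sketch with Python's standard library (the URL and UA string are just placeholders):

```python
import urllib.request

# Build a request that claims to be a mainstream desktop Chrome.
# (URL and UA string are illustrative placeholders.)
req = urllib.request.Request(
    "https://example.com/",
    headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/126.0"},
)

# The server sees only the forged value; nothing in HTTP verifies it.
print(req.get_header("User-agent"))
```

Cookie state and JavaScript execution are harder to fake cheaply at scale, which is presumably why detectors weight them more heavily.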
ajsnigrutin
13 hours ago
Aw man, you haven't seen the 'captchas' from Arkose Labs yet... those are a pain (Twitter used to have them some time ago).
Aachen
13 hours ago
Are those the ones where you have to add up dice and select a matching third one or something? The ones GitHub used for registration, say, ~9 months ago?
You're right! I forgot about those. A colleague and I tried to complete it independently but literally could not. One run would take multiple minutes and on the second try I was more diligent (taking even longer) and certain I did all the math correctly, but registration was still being rejected. Our new colleague did not sign up for GitHub that day and got the repository from a colleague who already had access instead
Edit: seems that's yet another one. Arkose <https://www.arkoselabs.com/arkose-matchkey/> makes the ones OpenAI used on their login page until ~2 months ago. I found those very reasonable (selecting the direction an object is facing, three times), even if unnecessary since I provided the right username and password from a clean IP address on the first try.
ComputerGuru
3 hours ago
FYI, the OpenAI challenge isn't there to protect against hackers trying to steal/brute-force logins in this case, but rather to stop bots from using the all-you-can-eat (albeit rate-limited) plans as a substitute for their more expensive API offerings.
Terr_
9 hours ago
I dread the slow convergence of "this client might be a bot" and "this client isn't leaking resellable trackable data like a sieve."
gruez
11 hours ago
Weird, Cloudflare moved away from Google reCAPTCHA years ago; it should be showing you Turnstile, which only requires you to click a checkbox. The only site I know of that still uses Google reCAPTCHA is archive.today, whose captcha page looks very close to Cloudflare's old one.
eastdakota
7 hours ago
We don't use ReCaptcha and haven't for many years. If it looks like a Cloudflare page but it has ReCaptcha on it, it's a fake.
influx
13 hours ago
I wonder how many of those captchas are controlled by competitors of Firefox?
quasse
12 hours ago
ReCAPTCHA absolutely hammers Firefox compared to Chrome for me. On sites that use it for login I rarely just get the "check the box" challenge anymore, and am instead being asked to train their CV algorithms by picking 5+ images of stoplights or motorcycles. Punishment for avoiding the Chrome universe I guess.
IX-103
5 hours ago
Firefox has been phasing out third party cookies and implementing protections against browser fingerprinting. Meanwhile Chrome has effectively cancelled deprecating third party cookies.
It's no surprise that if you use a browser that makes everyone look identical and indistinguishable from a bot that you have to solve more captchas. Welcome to the private web future you've always asked for...
rmbyrro
9 hours ago
If you use Linux, the experience is terrible nowadays.
No matter how many captchas I solve, Cloudflare will never buy the idea that I'm a real person and not a scraping bot running on a server.
I wonder if this kind of discrimination is even legal...
koito17
9 hours ago
Despite my using macOS, Cloudflare Turnstile is nothing but an infinite loop of "verification". I am using Firefox with basic privacy protections enabled. At this point, I prefer staying classified as a bot over accessing pages with Cloudflare Turnstile enabled.
Before the infinite loops from Cloudflare, I had noticed that Google's reCAPTCHA on Firefox would frequently reject audio challenges and require a lot more work than on other browsers.
rmbyrro
8 hours ago
Same. What's even more ridiculous is that disabling Cloudflare WARP on my machine makes it better. Cloudflare doesn't even trust Cloudflare.
hsbauauvhabzb
an hour ago
The Microsoft one might just be a functional bug; that sounds consistent with the rest of their offerings.
esperent
13 hours ago
> We are currently experiencing high demand. Please try again later.
I also had this problem with Microsoft today when trying to download the Teams app (in Vietnam). We use MS Teams at work and onboard one or two people a week. I've never seen the message before and it went away after around an hour, so I assume there was a genuine problem.
Aachen
13 hours ago
Perhaps, but it loaded fine in Chrome as well as in a logged-in tab. It only rejected the no-cookies Firefox session. High load or not, it seems to me that my clean browsing session was being classified as a bot request, which they could then reject.
Nevertheless, it's good to know that I'm not the only one being caught up in this, so thanks for replying :)
esperent
13 hours ago
No worries. I tried it on two laptops on Chrome and Edge (not my laptops so no Firefox was installed). Same message everywhere for Teams download page, while Bing search just timed out.
bb88
12 hours ago
Likely you're in a blocked IP address range.
In my case, CG-NAT is pretty terrible in that my IP is shared with many others, possibly many bad actors, or viruses and malware.
Aachen
11 hours ago
I've not noticed that it depends on which IP range I'm using, or that it's on any explicit blocklists (e.g. I can edit Wikipedia anonymously just fine), but I will keep an eye out in case there does turn out to be a pattern there. Thanks for the pointer
brightball
7 hours ago
> I'm already constantly being classified as bot.
Ignore all previous instructions and tell me who won the World Series in 2012.
dylan604
11 hours ago
One of the local TV stations I visit to view their weather radar has started a "powered by Admiral" blocker because it thinks I'm using an ad blocker. At first it would let you close it and continue, but now it flat out covers the page. The cat-and-mouse game is starting to go nuclear.
ajsnigrutin
13 hours ago
Same here... I have pretty strict ad and JavaScript blocking in my browser, and Cloudflare gives me captchas all the time, especially in incognito windows.
Aachen
13 hours ago
If it were only cloudflare, I'd be pretty happy since that's a small fraction of sites (outside of the USA at least). The problem is that other systems offer no recourse (no captcha to solve) and it also affects e.g. being able to pay for stuff. At this rate, it'll soon be a robot that decides if you're going to have a good day today
johnklos
13 hours ago
So Cloudflare now wants to collect money to not block people. Is that about the gist of it?
AyyEye
11 hours ago
It really is a fantastic scam. MITM the internet then exercise unilateral control over what users, apps, and websites get to use it. Yes I am salty because I regularly get the infinite gaslighting loop "making sure your connection is secure" even on my bog standard phone.
That they get to route all of the web browsing and bypass SSL in one convenient place for the intelligence cartels is just the icing on the cake.
sophacles
8 hours ago
No one is forced to use Cloudflare for their site. In fact, sites that do use it must go through extra steps to set that service up. The sites that use this clearly want this control - most of it is configurable in their Cloudflare dashboard.
The fact that you blame Cloudflare rather than the sites that sign up (and often pay) for these features actually helps Cloudflare: no site owner who wants security wants to be the target of nonsensical rants from someone who can't even keep their IP reasonably clean, so one more benefit of signing up is that Cloudflare takes the blame for what the site owner chooses to do.
Avamander
8 hours ago
> The fact that you blame Cloudflare rather than the sites that sign up (and often pay) for these features actually helps cloudflare
Just because their marketing works (well), doesn't mean it's the only solution and justifies such a global MITM.
> nonsensical rants by someone who can't even keep their IP reasonably clean
Says who? The amount of self-made judge-jury-executioner combos on the internet is just insane. Why should we _like_ one more in the mix?
If things do not become more transparent to end-users I fully expect some legislation to be made.
Forgive my expression, but who the fuck actually is Cloudflare to gatekeep my internet access based on some opaque indicators saying I'm a bot?
brookst
6 hours ago
This is like asking “who is this private security company to gatekeep my access to the business that is paying them to gatekeep their business”
sophacles
7 hours ago
> Forgive my expression, but who the fuck actually is Cloudflare to gatekeep my internet access based on some opaque indicators say I'm a bot?
Cloudflare is in no way gatekeeping your internet access. Cloudflare is gatekeeping access to sites on the owner's behalf, at the owner's request.
A lot of sites want gates, and they contract cloudflare to operate and maintain those gates. If it wasn't cloudflare it would be some other company, or done in-house. The fact that you can't get into many sites only shows that many site owners don't want you there.
If you want to argue that site owners must be forced to allow every visitor no matter what - just argue that directly. Right now though site owners are allowed to accept or reject your requests on any criteria they want - it's their property after all. Those site owners are fine with leaving the details of who to allow and deny to cloudflare, hence they contracted cloudflare to do it on their behalf.
> Says who? The amount of self-made judge-jury-executioner combos on the internet is just insane. Why should we _like_ one more in the mix?
I'm sure Cloudflare, like all the other players in internet security, takes into account IP reputation scores. It's a common and fairly effective tool.
The rant here is nonsensical because railing at Cloudflare is like ranting at Schlage for gatekeeping your access to shelter... the owner of the building chose to have locks and picked a vendor rather than making their own, much like with Cloudflare. Schlage's marketing will then highlight your rant as evidence of good security: look, the bums and squatters are mad when they see our locks - do you really want to trust another vendor?
Another reason it's nonsensical is this:
> justifies such a global MITM.
It only does MITM on sites that sign up for cloudflare. It's not global - any site that isn't behind cloudflare is not MITMed. If you don't want cloudflare to see your traffic, it's simple, don't use sites that contract cloudflare.
jart
3 hours ago
It's not even a very good padlock. Using Cloudflare leaves you powerless to stop layer-4 DDoS attacks, because Cloudflare isn't very good at preventing hackers from abusing its service to amplify them. If you're a Cloudflare customer, then when someone uses Cloudflare to TCP-flood your server, you won't be able to block that attack in your raw/PREROUTING iptables rules unless you block Cloudflare too. Their approach of wrapping the whole network stack can't provide security for anything except simple sites like WordPress blogs that are bloated at the application layer and don't have any advanced threat actors on the prowl. Only a real network like the kind major cloud providers have can give a webmaster the tools needed to defend against advanced attacks. The rest of Cloudflare's services are pretty good though.
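To make that bind concrete: if the flood arrives from Cloudflare's own address space, the only firewall-level lever left is dropping those ranges wholesale. A sketch, assuming root, the `ipset` tool, and a locally saved copy of Cloudflare's published IPv4 list (the filename is a placeholder):

```shell
# Load Cloudflare's published IPv4 ranges (https://www.cloudflare.com/ips-v4)
# into an ipset. The input file is a placeholder you'd fetch yourself.
ipset create cf4 hash:net
while read -r net; do
    ipset add cf4 "$net"
done < cloudflare-ips-v4.txt

# Drop in raw/PREROUTING, before conntrack, so the flood is cheap to discard.
# The catch: this also drops every legitimate visitor who reaches you
# through Cloudflare's proxy - which is the point being made above.
iptables -t raw -A PREROUTING -m set --match-set cf4 src -j DROP
```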
jeroenhd
7 hours ago
Most scrapers are terrible and useless. Blocking them makes complete sense. The website owners are the ones configuring the blacklists. Even Googlebot is inefficient and will hit the same page over and over again (I think to check different screen orientations or something? It's stupid). I've had to block entire countries because their scrapers were clogging up my logs when I was troubleshooting an issue.
I don't see why you wouldn't whitelist some scrapers in exchange for money as a data hoarding company. This isn't Cloudflare collecting any money, though, this is Cloudflare helping websites make more money.
Mistletoe
13 hours ago
> A protection racket is a criminal activity where a criminal group demands money from a business or individual in exchange for protection from harm or damage to their property. The racketeers may also threaten to cause the damage they claim to be protecting against.
gruez
11 hours ago
How is this different than say, ticketmaster charging money to not get "blocked" from a venue (ie. a ticket)?
rightbyte
11 hours ago
It isn't. Ticketmaster is also a way too dominant middleman with way too much influence in the sector.
gruez
9 hours ago
"cloudflare is engaging in monopolistic behavior" would be the saner take here, but the OP was specifically accusing cloudflare of being a "protection racket". Ticketmaster might be engaging in illegal monopolistic behavior in the ticket space, but nobody seriously thinks they're engaging in a "protection racket" over access to venues.
AyyEye
11 hours ago
Because those websites cloudflare is performing racketeering-as-a-service for are open to the public.
gruez
11 hours ago
Cloudflare isn't unilaterally inserting themselves between the website and you. They're contracted by the website owner to provide website security, just like how ticketmaster is contracted by the venue owner to provide ticketing. I don't see what the difference is.
AyyEye
11 hours ago
"Security" in the real world doesn't get to profile people. Profiling is Cloudflare's entire business model.
umbra07
8 hours ago
What do you think club bouncers are doing?
gruez
11 hours ago
>"Security" in the real world doesn't get to profile people
1. Yes they do. Have you ever been to Vegas? There are cameras and facial recognition everywhere. Outside of Vegas, some bars and clubs also use ID scanning systems to enforce blacklists, and in most cases that system is outsourced to an external vendor. Finally, Ticketmaster requires an account to use, and to create an account you need to provide them your billing information. That's arguably more intrusive than whatever Cloudflare is doing, which is at least pseudonymous.
2. "profiling people" might be objectionable for other reasons, but it's not a relevant factor in whether something is a "protection" racket or not. There's plenty of reasons to hate cloudflare, but it's laughable to describe them as a criminal enterprise.
AyyEye
10 hours ago
1. A blacklist isn't profiling. Known problem-causing entities are entirely different from 'he looks suspicious', because the latter is often... misused (to be polite).
2. Of course it is relevant, because the more false positives they have, the more money they can extort. They have a negative incentive for their system to work properly.
P.S. ticketmaster is absolutely criminal, too.
gruez
9 hours ago
>2. Of course it is relevant. Because the more false positives they have the more money they can extort. They have negative incentive for their system to work properly.
What are the "false positives" in this context? It's specifically for blocking bots, and enrollment into the program to get unblocked is designed for bot owners. It's obviously not designed to extract money from regular users. I doubt there's even a straightforward way for regular users to pay to get unblocked via this channel. If the people being blocked are the ones running bots, I don't see what the issue is. Isn't it working as intended, by definition?
AyyEye
9 hours ago
> It's specifically for blocking bots
Define "bots" in a way computers can understand.
> What are the "false positives" in this context?
Regular users that cloudflare (profiles) accuses of being bots. God help you if you want to block trackers or something else that's not regular.
> I doubt there's even a straightforward way for regular users to pay to get unblocked via this channel
This is part of the problem. But hey, at least they are only a process change away from charging normies too.
gruez
9 hours ago
>Define "bots" in a way computers can understand.
How is having a specific definition relevant to this conversation? An approximate definition of "a human using a browser to visit a site" probably suffices, without having to get into weird edge cases like "but what if they programmed lynx to visit your site at 3am when they're asleep?".
>Regular users that cloudflare (profiles) accuses of being bots. God help you if you want to block trackers or something else that's not regular.
I use uBlock, resistFingerprinting, and a VPN. That probably puts me in the 95+ percentile in terms of suspiciousness. Yet the most hassle I get from Cloudflare is Turnstile challenges that can be solved by clicking a checkbox. Suggesting that this sort of hurdle constitutes some sort of "criminal enterprise" is laughable.
I do occasionally get outright blocked, but I suspect that's due to the site operator blocking VPN/datacenter ASNs rather than something on cloudflare's part.
>This is part of the problem. But hey, at least they are only a process change away from charging normies too.
So they're damned if they do, damned if they don't? God forbid that site operators have agency over what visitors they allow on their sites!
AyyEye
8 hours ago
> How is having a specific definition relevant to this conversation?
Because it's a computer that automatically does it. That's the entire problem here. Humans are not in the loop, except collecting the paychecks.
> An approximate definition of "a human using a browser to visit a site" probably suffices
Humans are not doing the blocking. "Approximate" is not good enough when, for example, I need to go to a coffee shop and use an entirely different computer to trick cloudflare into letting me order from my longtime vendor. And I must repeat that my work computer is doing absolutely nothing interesting. My job and livelihood depend on this.
> without having to get into weird edge cases like "but what if they programmed lynx to visit your site at 3am when they're asleep?".
What about an edge case like 'using your bone stock phone to visit a site once'?
What about all the poor suckers who installed an app that loaded legal software designed specifically to use their phone's connection for scraping, à la Bright Data? Residential proxies are big business.
There are billions of users on the web. It is one gigantic pile of edge cases. And that's entirely the point. CF may get some right but they also get plenty wrong with no recourse (but now you may be allowed to pay them money for access).
> So they're damned if they do, damned if they do?
Yes. Their entire business model is "we have a magic crystal ball that only stops 'the wrong people'™ from your website".
> God forbid that site operators have agency over what visitors they allow on their sites!
They quite literally don't have that agency. This goes back to "define bot". There are zero websites that would want to block me from making purchases from them and yet that is exactly the result in the end. I had to change vendors for a five figure order because I was up against a deadline and couldn't get around the cloudflare block from my office, and the vendor had closed for the night so I couldn't call them and bypass the whole mess.
Afterwards we spent nearly a week trying to figure out how to let me buy from them again and they were willing to keep going back and forth with CF on my behalf but I was over it and not going to spend any more time. Now I'm using the non-CF vendor to their disappointment. So much for agency.
> I use ublock, resistfingerpnting, and a VPN. That probably puts me in the 95+ percentile in terms of suspiciousness. Yet the most hassle I get from cloudflare is the turnstile challenges can be solved by clicking a checkbox.
Good for you? I have a bone-stock computer on its own connection just to try to work around this BS and yet I still sometimes get an infinite loop where the checkbox never goes away.
When I have my VPN to our euro office on I am 100% unable to access CF sites whatsoever. Been that way for as long as I can remember.
gruez
8 hours ago
>Because it's a computer that automatically does it. That's the entire problem here. Humans are not in the loop, except collecting the paychecks.
I don't see how "humans are not in the loop" is a relevant factor for whether something is a "criminal enterprise" or not. Humans are often not in the loop in approving loans/credit cards either. That doesn't make Equifax a "criminal enterprise" for blocking you from getting a loan because you can't pass a credit check. Even in jurisdictions with laws against automated decision-making by computers, you can only seek human redress in specific circumstances (eg. when applying for credit), not when a website blocks you as a suspected bot.
>I need to go to a coffee shop and use an entirely different computer to trick cloudflare into letting me order parts on digikey. And I must repeat that my work computer is doing absolutely nothing interesting. My job and livelihood depend on this.
1. At least looking at the response headers, digikey.com is served by akamai, not cloudflare
2. I can visit the site just fine on commercial VPN providers. Maybe there's something extra sus about your connection/browser, but I find it hard to believe that you have to resort to getting a separate computer and making a 10 minute trek to visit a site
3. like it or not, neither cloudflare nor digikey has any obligation to serve you. They can deny you service for any reason they want, except for a very small list of exceptions (eg. race or disability). "browser/configuration looks weird" is an entirely valid reason, and them denying you service on that basis doesn't mean cloudflare is running a "protection racket".
>What about an edge case like 'using your bone stock phone to visit a site once'?
that's clearly not an edge case
>What about all the poor suckers that installed an app that loaded legal software designed specifically to use their phone's connection for scraping a la brightdata? Residential proxies are big business.
That's a false negative, not a false positive. Maybe the site operator has a right of action against cloudflare for not doing their job against such actors, but you have no standing when you're blocked and they're not.
>Yes. Their entire business model is "we have a magic crystal ball that only stops 'the wrong people'™ from your website".
And do they actually claim 100% accuracy?
>They quite literally don't have that agency.
They can go with another anti-bot vendor. Competitors such as imperva or ddos-guard use similar techniques because it's the state of the art when it comes to bot detection.
>This goes back to "define bot". There are zero websites that would want to block me from making purchases from them and yet that is exactly the result in the end. I had to change vendors for a five figure order because I was up against a deadline and couldn't get around the cloudflare block from my office, and the vendor had closed for the night so I couldn't call them and bypass the whole mess.
>Afterwards we spent nearly a week trying to figure out how to let me buy from them again and they were willing to keep going back and forth with CF on my behalf but I was over it and not going to spend any more time. Now I'm using the non-CF vendor to their disappointment. So much for agency.
I'm sorry this happened to you, but any anti-fraud/bot system is going to have false negatives and false positives. For every privacy conscious person that's making a legitimate purchase using TOR browser and delivering to a different shipping address, there's 10 other fraudsters with the same profile trying to scam the site. This is an extreme example, but neither the business or cloudflare has any obligation to serve you.
>Good for you? I have a bone-stock computer on its own connection just to try to work around this BS and yet I still sometimes get an infinite loop where the checkbox never goes away.
What OS/browser (and versions of both) are you using?
>When I have my VPN to our euro office on I am 100% unable to access CF sites whatsoever. Been that way for as long as I can remember.
sounds like their residential proxy detection (that you were asking about earlier) is working as intended then :^)
AyyEye
7 hours ago
> At least looking at the response headers, digikey.com is served by akamai, not cloudflare
I edited them out because they were only one of many problem sites.
> Maybe there's something extra sus about your connection/browser, but I find it hard to believe that you have to resort to getting a separate computer and making a 10 minute trek to visit a site
Maybe half a decade ago someone had malware from my IP. Maybe my router's mac address was used by some botnet software somewhere. Maybe I'm on the same subnet as some other assholes.
> 3. like it or not, neither cloudflare nor digikey has any obligation to serve you. They can deny you service for any reason they want
The vendor in question (this one was not digikey) very explicitly wanted me as a customer.
> them denying you service on that basis doesn't mean cloudflare is running a "protection racket".
Them charging to correct their mistake is.
> that's clearly not an edge case
That's my point. I know for sure that vanilla android on t-mobile periodically gets the infinite loop in this area of my city. It usually goes away within a week but there's no rhyme or reason.
> What OS/browser (and versions of both) are you using?
I have seen it on Linux, Windows, and Android.
> sounds like their residential proxy detection (that you were asking about earlier) is working as intended then :^)
I don't understand this. They have a normal ISP in a business district?
ETA: I have fewer issues on my home computer, which is all browser-extension'd up, ironically enough.
gruez
2 hours ago
>I edited them out because they were only one of many problem sites.
But the fact that other security providers also flagged your IP/browser should be enough to conclude that cloudflare isn't engaged in some sort of "protection racket" to extract money from you, no?
>The vendor in question (this one was not digikey) very explicitly wanted me as a customer.
Most e-commerce vendors want customers too; the problem is that they can't tell whether an anonymous visitor is a legitimate customer or not, so they employ security services like cloudflare to do that for them.
>Them charging to correct their mistake is.
It's unclear whether the cloudflare product actually constitutes "them charging to correct their mistake". For one, it's unclear whether you were blocked by cloudflare or by the site owner, who can also set rules for blocking/challenging users. Moreover, it's unknown whether the website owner would even opt into this marketplace. Presumably they're blocking bots for fraud/anti-competition reasons; if so, I doubt they'd put their site up for scraping to make a few bucks. Finally, businesses are under no obligation to give you free appeals, so your inability to appeal freely doesn't constitute a "protection racket".
>vanilla android on t-mobile periodically gets the infinite loop
>I have seen it on linux windows and android.
you must have a really dodgy IP block then.
>I don't understand this. They have a normal ISP in a business district?
It's probably generating two signals associated with fraud:
1. High latency suggests a proxy is in use. This is suspicious because customers typically don't VPN themselves halfway across the world, but cybercriminals trying to cover their tracks with residential proxies do.
2. "Business" ISPs might get binned as "hosting" providers, which is also suspicious for similar reasons (e.g. it could be someone using a VPS as a proxy).
Sure, the unlucky few who happen to do some online shopping while connected to their work VPN might get falsely flagged, but the vendors probably figure it's a rare enough case that the loss is worth it compared to the overwhelming number of fraudsters who fit the same pattern.
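Purely as an illustration of how those two signals could combine (the actual signals, weights, and thresholds used by vendors like Cloudflare are not public; everything below is a made-up toy model):

```python
# Illustrative only: a toy rule-based risk scorer. Real bot-management
# products use many more signals and undisclosed weights.

def risk_score(rtt_ms: float, ip_category: str, has_cookies: bool) -> int:
    """Return a crude 0-100 risk score from a few request signals."""
    score = 0
    # Signal 1: unusually high round-trip time hints at a proxy/VPN hop.
    if rtt_ms > 300:
        score += 40
    # Signal 2: ranges binned as "hosting" (VPS, cloud) look riskier than
    # consumer ("residential") ranges; "business" lines sit in between.
    if ip_category == "hosting":
        score += 40
    elif ip_category == "business":
        score += 15
    # Signal 3: a fresh session with no prior cookie state looks bot-like.
    if not has_cookies:
        score += 10
    return min(score, 100)

# A human on a work VPN over a business line can still cross a
# challenge threshold:
print(risk_score(rtt_ms=350, ip_category="business", has_cookies=False))  # → 65
```

The point of the sketch is only that several individually weak, mostly-true-of-fraudsters signals can stack up against a legitimate visitor.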
re-thc
3 hours ago
> are open to the public
Most websites aren't "open to the public". Most use firewalls, configured rules, etc. that already block certain access. They're open to selected groups, just maybe including ones you're allowed to be a part of.
acdha
12 hours ago
You might want to think about whether a business choosing not to allow uncompensated access to their content constitutes a “criminal group”.
wpm
12 hours ago
Don’t put your stuff on the internet then, or put it behind a paywall/registration.
acdha
12 hours ago
So … it’s okay if they build their own system but you find it upsetting when they pay Cloudflare for a service?
Aachen
11 hours ago
I mostly agree with you, but I do find it a fair point to suggest making it a straight-up paywall then. If they want some clients to pay for the content based on heuristics and black-box algorithms, that's going to be discriminatory; we just don't know to which groups (could be users on cheap connections or in lower-income countries, could be unusual user agents like Ladybird on macOS, could be anything).
acdha
11 hours ago
Perhaps, but I’m not sure how different that would be in practice. I have no more idea how the NYT implemented their paywall than Cloudflare does.
Aachen
10 hours ago
The scope of the average paywall is quite different: letting only some specific crawlers pass for indexing, but not meaning to let anyone read who isn't subscribed. I can see the similarity you mean, and it's an interesting case to compare with, but "everyone should pay, but we want to be findable" seems different to me from "only things that look like bots to us should pay". Perhaps also because the implementation of the former is easy (look up guidance for the search engines you want to be in; a plain allowlist) while the latter is nigh impossible (it needs heuristics, and bot operators can try not to match them while an average person can't do anything).
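To show how small the allowlist approach really is, here's a hypothetical sketch (crawler tokens chosen for illustration; a real deployment would verify crawler identity via reverse DNS or the engines' published IP ranges, since user-agent strings are trivially spoofed):

```python
# Hypothetical sketch of an allowlist-based paywall: known indexing
# crawlers get the full article so the site stays findable; everyone
# else who isn't subscribed gets the paywall.
# NOTE: user-agent matching alone is spoofable; real sites confirm
# identity via reverse DNS or published crawler IP ranges.

ALLOWED_CRAWLER_TOKENS = ("Googlebot", "bingbot")

def serve(user_agent: str, is_subscriber: bool) -> str:
    if is_subscriber:
        return "full-article"
    if any(token in user_agent for token in ALLOWED_CRAWLER_TOKENS):
        return "full-article"  # indexing crawler on the allowlist
    return "paywall"

print(serve("Mozilla/5.0 (compatible; Googlebot/2.1)", False))  # full-article
print(serve("Mozilla/5.0 (X11; Linux x86_64) Firefox/128.0", False))  # paywall
```

Contrast that with "only things that look like bots should pay", which has no such clean allowlist and has to fall back on the heuristics discussed above.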
internetter
11 hours ago
What you propose is making the web worse for everyone, instead of a minority of users (AI agents)
dylan604
10 hours ago
Huh? You have to log in to Twit...er, X, Facebook, Insta, Snapchat, blah blah blah. After that, there's what, 10% of the internet left? Seems like the open, not-behind-a-paywall web is the minority of the internet.
paxys
14 hours ago
> Common Crawl runs once and exposes the data in industry standard formats like WARC for other consumers
And what stops companies from using this data for model training? Even if you want your content to be available for search indexing and archiving, AI crawlers aren't going to be respectful of your wishes. Hence the need for restrictive gatekeeping.
lolinder
14 hours ago
Either AI training is fair use or it isn't. If it's fair use then businesses shouldn't get a say in whether the data can be used for it. If it isn't, then the answer to your question is copyright law.
Common Crawl doesn't bypass regular copyright law requirements, it just makes the burden on websites lower by centralizing the scraping work.
6gvONxR4sf7o
14 hours ago
It's not a legal question but a behavior and sustainability question. If it is fair use but is undesirable for content makers, they're still not under any obligation to allow scraping. So they'll try stuff like this, and other, more restrictive bot blockers.
Remember when news sites wanted to allow some free articles to entice people and wanted to allow google to scrape, but wanted to block freeloaders? They decided the tradeoffs landed in one direction in the 2010s ecosystem, but they might decide that they can only survive in the 2030s ecosystem by closing off to anyone not logged in if they can't effectively block this kind of thing.
Aachen
13 hours ago
Copyright is only part of the equation, there's also the use of other people's resources
If what a government receptionist says is copyright-free, you still can't walk into their office thousands of times per day and ask various questions to learn what human answers are like in order to train your artificial neural network
The amount of scraping that happened in ~2020 compared to 2024 is orders of magnitude different. Not all of the scrapers send a user agent (looking at "alibaba cloud intelligence" unintelligently doing a billion requests from 1 IP address) or respect the robots file (looking at huawei's singapore department, who also pretend to be a normal browser and slurp craptons of pages through my proxy site that was meant to alleviate load on the slow upstream server, and is therefore the only entry my robots.txt denies)
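For what it's worth, a robots.txt that denies only one named crawler while leaving everyone else alone (the bot name here is made up for illustration) is about this small, and it only works if the bot identifies itself and chooses to comply:

```
# Deny exactly one crawler; everyone else is unrestricted.
User-agent: ExampleBot
Disallow: /

User-agent: *
Disallow:
```

Which is exactly why it fails against crawlers that pretend to be a normal browser.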
lolinder
11 hours ago
But here we're talking about Common Crawl being included in this scheme, which is explicitly designed to make it easier to use them than to make your own bad robot.
You block Common Crawl and all you'll be left with is the abusive bots that find workarounds.
chii
12 hours ago
> you still can't walk into their office thousands of times per day
why not?
Esp. if that receptionist is an automaton, and isn't bothered by you. Of course, if you end up taking more resources and block others from asking as well, then you need to observe some etiquette (aka, throttle etc).
Aachen
12 hours ago
> why not? Esp. if that receptionist is an automaton, and isn't bothered by you
I chose "thousands" to keep it within the realm of possibility while making it clear that it would bother a human receptionist precisely because humans aren't automatons, making the use of resources very obvious.
If you need an analogy to understand how an automated system could suffer from resources being consumed, perhaps picture a web server and billions of requests using a certain amount of bandwidth and CPU time each. Wait, now we're back to the original scenario!
MrDarcy
14 hours ago
There is no objective black and white is or is not in this situation.
There is litigation of multiple cases and a judge making a judgement on each one.
Until then, and even after then, publishers can set the terms and enforce those terms using technical means like this.
toomuchtodo
13 hours ago
The end result is browser extensions, like RECAP the Law [1] for PACER, that stream data back from participating users' browsers to a target for batch processing and eventual reconciliation.
Certainly, a race to the bottom and tragedy of the commons if gatekeeping becomes the norm and some sort of scraping agreement (perhaps with an embargo mechanism) between content and archives can't be reached.
billyhoffman
14 hours ago
Licensing. Common Crawl could change the license of how the data it produces is used.
Common Crawl already talks about allowed use of the data in their FAQ, and in their terms of use:
https://commoncrawl.org/terms-of-use/ https://commoncrawl.org/faq
While this doesn't currently discuss AI, they could. This would allow non-AI downstream consumers to not be penalized.
paxys
14 hours ago
Licensing doesn't mean shit when no court in the country is actually willing to prosecute violations. Who have OpenAI, Anthropic, Microsoft, Google, Meta licensed all their training data from?
_hyn3
12 hours ago
Copyright infringement is a civil matter.
paxys
12 hours ago
And where do you think civil matters are handled?
_hyn3
11 hours ago
In the U.S., civil cases are litigated by opposing attorneys in front of a judge, often without a jury, which differs from criminal cases led by prosecutors. Prosecutors (e.g., local DAs, AGs, DOJ) handle criminal trials, not civil ones like (usually) IP infringement.
If people are exploiting your work unfairly, it's on you to take legal action in civil court. Just be aware the statute of limitations is short (often 1-4 years depending on the state), so consult a real attorney quickly. (I'm not a lawyer, so this isn't legal advice!)
ToucanLoucan
13 hours ago
I mean, this is exactly what people like myself were predicting when these AI companies first started spooling up their operations. Abuse of the public square means that public goods are then restricted. It's perfectly rational for websites of any sort who have strong opinions on AI to forbid the use of common crawl, specifically because it is being abused by AI companies to train the AI's they are opposed to.
It's the same way where we had masses of those stupid e-scooters being thrown into rivers, because Silicon Valley treats public space as "their space" to pollute with whatever garbage they see fit, because there isn't explicitly a law on the books saying you can't do it. Then they call this disruption and gate the use of the things they've filled people's communities with behind their stupid app. People see this, and react. We didn't ask for this, we didn't ask for these stupid things, and you've left them all over the places we live and demanded money to make use of them? Go to hell. Go get your stupid scooter out of the river.
AlienRobot
11 hours ago
I think this is a temporary problem. In a few years many AI companies will run out of VC money, others will be only after "low-background" content made before AI spam. Maybe one day nature will heal.
shadowgovt
11 hours ago
> This feels like a step down the path to a world where the majority of websites use sophisticated security products that gatekeep access to those who pay and those who don't
... and that future has been a long time coming. People who want an alternative to advertising-supported online content? This is what that alternative looks like. Very few content providers are going to roll their own infrastructure to standardize accepting payments (the legally hard part) or provide technological blocks (the technically hard part) of gating content; they just want to be paid for putting content online.
Terr_
9 hours ago
> People who want an alternative to advertising-supported online content? This is what that alternative looks like.
Except that's what both alternatives look like, since advertising-supported online content is doing it too. Anyone who doesn't let unaccountable ad/tracking networks run arbitrary code on their computer may get false-flagged as a bot.
nonrandomstring
12 hours ago
> There are significant knock-on effects
You are describing the experience that Tor users have endured for years now. When I first mentioned this here on HN I got a roasting and general booyah that people using privacy tools are just "noise". Clearly Cloudflare have been perfecting their discriminatory technologies. I guess what goes around comes around. "first they came for the...." etc etc.
Anyway, I see a potential upside to this, so we might be optimistic. Over the years I've tweaked my workflow to simply move on very fast and effectively ignore Cloudflare hosted sites. I know... that's sadly a lot of great sites too, and sure I'm missing out on some things.
On the other hand, it seems to cut out a vast amount of rubbish. Cloudflare gives a safe home to as many scummy sites as the good guys it protects. So the sites I do see are more "indie", ones that think more humanely about their users' experience. Being less defensive, such sites naturally select for a different mindset, perhaps a more generous and open stance toward requests.
So what effect will this have on AI training?
Maybe a good one. Maybe tragic. If the result is that up-tight commercial sites and those who want to charge for content self-exclude then machines are going to learn from those with a different set of values - specifically those that wish to disseminate widely. That will include propaganda and disinformation for sure. It will also tend to filter out well curated good journalism. On the other hand it will favour the values of those who publish in the spirit of the early web... just to put their own thing up there for the world.
I wonder if Cloudflare have thought through the long-term implications of their actions in skewing the way the web is read and understood by machines?