Stop Using Cloudflare

1. It is a GIANT man-in-the-middle (MITM).
2. Their DDoS protection is not that good.
3. You are contributing to a centralized Internet.


@selea can you recommend an alternative? Preferably something from this list doc.traefik.io/traefik/https/a so I might actually have the energy to switch.

@selea For whatever reason I switched to Traefik to handle HTTPS stuff and I really don't have the energy to switch to something else. Buuuut desec.io at least seems like a better drop-in replacement for Cloudflare.

@JonossaSeuraava

The link you shared is a DNS service and doesn't have anything to do with HTTPS

@selea No but yes. Let's Encrypt has DNS challenges and at the moment Traefik is dealing with those using Cloudflare's API. I think. I really don't actually know, nor do I want to at the moment.

As I understand it, in order for me to get away from Cloudflare, a DNS service is exactly what I need in this situation.

@JonossaSeuraava@layer8 I think @selea meant CF in their role as CDN, not as DNS provider. That's two different parts of the puzzle. Then there's also CF the DNS resolver, which is yet another piece.

IMO, the CDN and DNS resolver are the problematic bits, since that's centralizing things into a black box for users.

@JonossaSeuraava I just run my own DNS server (PowerDNS) with which Traefik works just fine.

@selea I can't use some "important" sites, because strict privacy settings within Firefox block Cloudflare.

Also, some sites only work partially, because Google APIs are also blocked.

We need to get back to a decentralised web! Also, for people still counting on central CDNs: Firefox now also uses partitioned (per-site) caches. They just don't work the way people still think.

@selea
I am using a #CDN - not Cloudflare - for delivering images because my website is on a shared host and I think it makes my website load faster.

Do you think I should stop using it?

@panigrc

> I think it makes my website load faster.

You think? What happened to measuring?

Chances are it doesn't load faster at all (not that it matters unless you're in the millions of hits per day). This is because whatever their marketing says, the client still needs at least one more #DNS query before it can request your images.

If you're using a variety of CDNs as many do, it very quickly becomes a joke.

Just serve everything from the same domain.
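
If you want numbers instead of vibes, here's a rough stdlib-only Python sketch for timing that extra #DNS lookup. The hostnames and paths are placeholders - swap in your own domain and your CDN hostname:

```python
# Rough timing sketch: compare fetching the same asset from the page's own
# domain vs. a separate CDN hostname. Hostnames/paths below are placeholders.
import socket
import time
from urllib.request import urlopen

def timed_fetch(host: str, url: str) -> tuple[float, float]:
    t0 = time.perf_counter()
    socket.getaddrinfo(host, 443)   # the extra DNS lookup a fresh client pays
    t1 = time.perf_counter()
    urlopen(url).read()             # the asset fetch (resolves again, likely from cache)
    t2 = time.perf_counter()
    return t1 - t0, t2 - t1

for host, url in [
    ("example.com", "https://example.com/img/logo.png"),                 # same origin
    ("cdn.example-cdn.net", "https://cdn.example-cdn.net/img/logo.png"), # separate CDN host
]:
    dns, fetch = timed_fetch(host, url)
    print(f"{host}: DNS {dns * 1000:.1f} ms, fetch {fetch * 1000:.1f} ms")
```

(It's crude - OS DNS caches and connection reuse will skew it - but it's a start toward actually measuring.)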


@panigrc @selea

Remember also that they started this business back when dynamic content (#PHP, #ColdFusion [!], etc.) was all the rage.

Nowadays anyone who knows what they're doing will be using static content + #API.

A normal, run-of-the-mill #VPS can serve 2M+ page loads a day (that averages out to roughly 23 per second) without breaking a sweat (personal experience), provided that you have a sensible setup.

@0 Not exactly... I use dynamic content on my website *because* I know what I'm doing...
APIs are fun n all, but they can be really fragile (thanks to JS being wonky at times).

Someone didn't enable JS or some of your JS doesn't get executed properly (eg. due to some client-side APIs missing)? Too bad, your site now doesn't load/display correctly.

Also, putting additional strain on the client for that shit is kinda bad imo...

@finlaydag33k

> Also, putting additional strain on the client for that shit is kinda bad imo...

On the contrary. It scales much better and distributing the load is the environmentally (and economically) friendly thing to do.

I use #NoScript like everyone else, but if I see a good reason for it I'll allow client-side #JavaScript for single origin sites.

@0
- Scales better: yes, no doubt about that.
- Environmentally friendlier: Depends on the infra, but in general, it's more environmentally friendly to _not_ have it on the clients (especially if said clients are mobile devices)

And with noscript lies a problem...
Because if you run it, I now need to convince you to enable it...
You may do so, but a random person will likely just go: "oh, site doesn't load, kbai"...

@finlaydag33k

Ah yes, the random person that runs #NoScript but doesn't know what it does. 🙄

And once again, what's the load on your sites?

There are two ways this can work:

* Your sites are one of the very rare kind that genuinely *need* to present dynamic (i.e., changes on every load) content without relying on client-side processing. I'm struggling to find one such example.

* Your sites are very low traffic so it doesn't really matter one way or another.

@0 Well actually, I see quite a lot of goofballs running NoScript because they heard about it once...
It's like people that still run ABP instead of uBlock Origin...

The load on my websites is about 5k visitors a week, not much but still something.

The only real dynamic content that gets loaded up:
- blogposts (though I am looking into moving these to an API - since these _can_ take a while to load).
- The login status of people (eg. showing "login/register" when not logged in).

@finlaydag33k

> I see quite a lot of goofballs running #NoScript because they heard about it once

How do you see them? So-called #analytics?

And what do you gain by taking the load server-side? Do you have paying users that cover the running costs or are you just financing their browsing out of your own pocket?

Also, why do blog posts need to be dynamically generated as opposed to static content? You can't possibly be *that* prolific of a writer.

@0 No, I see them by literally being there :p
I work in IT and do go from customer to customer (albeit now a bit less due to the covid situation) and see a lot of em running NS.

What I gain by running the load server-side?
Less risk of breakage due to client-side issues.
Financing is done out of my own pocket (with support from donations) but the server runs here anyways so it doesn't cost me a penny more or less.

My blog posts are dynamically generated because of the shortcodes I use.

@finlaydag33k

Short codes?

> but the server runs here anyways so it doesn't cost me a penny more or less.

No, as I said, for a low traffic site it makes bugger all difference.

@0 Yes, Shortcodes.
Basically I type something like `[img src="<url>" caption="Yayeet"]` in my post and my website renders the HTML (adding a card with some content n shizz).

And there you have it, remember that I said: "I use dynamic content on my website *because* I know what I'm doing..." :p
I never said that static-only with an API doesn't have its place, I said that I don't use it because the "savings" do not outweigh the "expenses".

Also, less client-side stress == better UX overall.
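
For anyone wondering what that kind of shortcode expansion looks like in practice, here's a minimal Python sketch. The tag name, attributes and output markup are assumptions for illustration, not the actual WordPress-style engine the site uses:

```python
# Minimal shortcode expansion: turn [img src="..." caption="..."] into an
# HTML figure at render time. Output markup is illustrative only.
import html
import re

SHORTCODE = re.compile(r'\[img\s+src="(?P<src>[^"]+)"\s+caption="(?P<caption>[^"]+)"\]')

def render_shortcodes(text: str) -> str:
    def replace(match: re.Match) -> str:
        src = html.escape(match.group("src"), quote=True)
        caption = html.escape(match.group("caption"))
        return (f'<figure class="card">'
                f'<img src="{src}" loading="lazy" alt="{caption}">'
                f'<figcaption>{caption}</figcaption></figure>')
    return SHORTCODE.sub(replace, text)

print(render_shortcodes('Intro text [img src="/cat.jpg" caption="Yayeet"] outro.'))
```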

@finlaydag33k

> [img src="<url>" caption="Yayeet"]

So a rudimentary domain-specific language / expression syntax (#DSL)? But how does that introduce a requirement for the site to be dynamically generated? I gather that the output from that example is always constant? If so, what stops you from doing the server-side rendering ahead of time instead? It's difficult to tell without knowing the exact nature of your site, but from the info given it seems far from clear that you have an optimal solution.

@0 Sort of yes, it uses a port of the WordPress shortcode "engine".

The output is 99% constant, *however* it may be that the "view template" gets changed (e.g. when I added captions to images, then later source references, and even later gave it a "loading" placeholder).
At which point, the post needs to be re-rendered.
Or if I change the post content at all.

And ofc, different params give slightly different outputs.
At most I can use a cache, but that's basically where it ends.
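
That cache can actually carry most of the weight. Here's a tiny sketch of one way to key it; the helper names and the TEMPLATE_VERSION marker are made up for illustration:

```python
# Cache the rendered HTML keyed on both the post source and a template
# version, so editing a post or bumping the template triggers a re-render.
# render() stands in for whatever actually expands the shortcodes.
import hashlib

TEMPLATE_VERSION = "v3-captions-sources-lazy"   # bump when the view template changes
_render_cache: dict[str, str] = {}

def cached_render(post_source: str, render) -> str:
    key = hashlib.sha256(
        f"{TEMPLATE_VERSION}\x00{post_source}".encode("utf-8")
    ).hexdigest()
    if key not in _render_cache:
        _render_cache[key] = render(post_source)   # only pay the render cost on a miss
    return _render_cache[key]
```

With a key like that, "the output is 99% constant" turns into "99% of requests are a dictionary lookup".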

@finlaydag33k

In other words, there's zero need for dynamic page generation.

Honestly, as soon as someone says "trust me, I know what I'm doing" alarm bells start ringing. 😲

@0 There is still a need, else I wouldn't have done it ;)

The need: I need to be able to change the rendered output and have it work 100% of the time.
The solution: I use dynamic pages.

Yes, I could use an API for it, but then we get back to the part where JS is iffy af.

I have thought about it for a long time and this is the best way to do it for my situation.

Again, I know what I'm doing.

@0

> You think? What happened to measuring?

Well you are right, I see no difference whatsoever.

I'll disable the CDN usage.

Thanks for reminding me 😅

@panigrc

No probs. As #Descartes said: "metior, ergo sum."

(Or so he would have said if he'd been an #engineer, anyway)

@panigrc

Not really actually, using a CDN to deliver static content is way different to sending credentials via a third party

@selea 4. it now requires tracking cookies in order to serve your website at all

@samgai @selea not sure what the OP meant, but CF's DDoS protection is not really gratis: once you exceed the threshold of allowed traffic, CF demands payment. If you're going to pay, CF is probably not the best bang for the buck. Someone has to pay for all the freeloaders.

@selea what would your suggestion be?

Genuine noob question.

@selea people act confused when I object to it, but it is simple as this, really

@selea 🍬 Which companies offer better protection rackets against DDoS attacks? 🍬

@selea I'm having enough trouble trying to convince people that Amazon and Google might not have their best interests at heart. I don't think I'll ever get around to red-pilling people on CloudFlare, unfortunately 😩

@jonn @selea How can they know your user agent if they don't decrypt the SSL?

@qorg11 @selea

I didn't care too much for cloudflare until I read this.

That's kind of huge though. I can't wrap my head around why this is the only architecture possible, but if someone pitched me the idea of MITMing their users, I'd say it would never fly.

Intuitively, it's just not a valid SaaS model. I'd say that they're better off selling their classifiers for active use in their customers' load balancers.

But sadly the world is the way it is. :D

@jonn "I can't wrap my head around why this is the only architecture possible"

Because what CF does is:
- receive the request from the client.
- decrypt the request.
- check whether it's a "malicious one" or not.
- forward the request to the origin.

Due to the way the protocol works, they *need* to decrypt the traffic to check its content and determine whether it's legit traffic or malicious traffic.
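
In code terms, that pipeline is roughly a TLS-terminating reverse proxy. A toy stdlib-only Python sketch of the shape of it; the cert/key paths, origin address and is_malicious() rule are all placeholders, not anything Cloudflare actually runs:

```python
# Toy "terminate TLS, inspect, then forward" proxy. Error handling omitted;
# cert/key files, origin URL and the filtering rule are illustrative only.
import ssl
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import Request, urlopen

ORIGIN = "http://127.0.0.1:8080"   # backend the edge forwards to (assumption)

def is_malicious(handler) -> bool:
    # Stand-in for real traffic classification: the point is that the
    # request is already decrypted by the time this check can run.
    return "attack" in handler.path

class FilteringProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        if is_malicious(self):
            self.send_error(403, "Blocked at the edge")
            return
        # Forward the (now plaintext) request to the origin.
        with urlopen(Request(ORIGIN + self.path, headers=dict(self.headers))) as resp:
            body = resp.read()
            self.send_response(resp.status)
            for key, value in resp.getheaders():
                if key.lower() not in ("transfer-encoding", "connection"):
                    self.send_header(key, value)
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    server = ThreadingHTTPServer(("0.0.0.0", 8443), FilteringProxy)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("edge-cert.pem", "edge-key.pem")  # the edge's own cert
    server.socket = ctx.wrap_socket(server.socket, server_side=True)
    server.serve_forever()
```

The load_cert_chain() line is the MITM part: the edge presents its own certificate, so it sees every request in plaintext before anything reaches the origin.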

@finlaydag33k well, sure, but they could just as well sell NGINX modules that do it the other way around with the same latency, together with an active queue solution for operation under DDoS conditions.

Of course, MITM'ing is easier, but I'm genuinely surprised people subscribe to this MO.

@jonn That'd also become an issue, because then CloudFlare wouldn't know where to forward the data (the "Host" header is also encrypted when using HTTPS).
So then, the only way would be to forward the traffic directly to the server...
Which then _still_ has to process the traffic (both legit and malicious), at which point the entire purpose of CloudFlare would be nullified.

@finlaydag33k forwarding packets is orders of magnitude cheaper than processing them, and the number of roundtrips would be the same.

The only real problem is that it's not drop-in and has the upfront expense of processing packets.

@jonn Well, think of it like this:
With CloudFail:
- CF receives
- CF decrypts
- CF checks
- CF forwards
- You process request

The first 3 steps don't cost the origin any resources at all.

Now imagine running it as an Nginx module:
- You receive (either with or without forward from CF)
- You decrypt
- You send to CF
- CF checks
- CF sends results
- You check results
- You process request

Now you have a few additional steps that *do* cost you resources in order to know whether to continue.
1/2

@jonn So basically you already did some grunt work in order to know whether someone is malicious or not...
So the attacker still wastes your resources.
They need a bit _more_ in order to take you out, but they can still take you out the same way.

CloudFail tries to prevent this by filtering *before* forwarding, so your origin can spend all of its time processing legit requests.

2/2
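
To make the "grunt work" point concrete, here's what the origin-side variant from the list above might look like as a WSGI middleware. The verdict API URL and response format are invented for illustration:

```python
# Origin-side filtering sketch: ask an external classifier before handling
# the request. CHECK_URL and the verdict JSON shape are hypothetical.
import json
from urllib.request import Request, urlopen

CHECK_URL = "https://filter.example.net/check"   # hypothetical verdict API

class FilterMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # By this point the origin has already accepted the TCP connection,
        # done the TLS handshake and parsed the request -- that work is
        # spent whether or not the verdict comes back "malicious".
        summary = json.dumps({
            "remote_addr": environ.get("REMOTE_ADDR", ""),
            "path": environ.get("PATH_INFO", "/"),
            "user_agent": environ.get("HTTP_USER_AGENT", ""),
        }).encode("utf-8")
        verdict = json.load(urlopen(Request(
            CHECK_URL, data=summary, headers={"Content-Type": "application/json"}
        )))
        if verdict.get("malicious"):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"blocked\n"]
        return self.app(environ, start_response)
```

Even when the verdict comes back "malicious", the origin has already paid for the connection, the TLS handshake, request parsing and a round trip to the classifier - which is the resource drain being described.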

@finlaydag33k but in the current arch, after "- You process request" comes "you forward back to CF", which relays to the client, making it exactly the same number of roundtrips except for an extra re-encryption step for the server (with O(1) processing, say, "strip request body").

@jonn You forget that roundtrips aren't the issue here, it's the fact that the origin can still relatively easily be attacked.
If your origin has to do any processing on malicious requests, the purpose of cloudfail is nullified.
