Monday, October 25, 2021

Debugging Origin Headers that Cause CloudFlare (or CDNs) to not Cache

We recently enabled the CloudFlare CDN on our domain and noticed that resources always showed as EXPIRED or MISS in the response headers. This means the CDN treated the origin resource as expired, fetched a fresh copy from the origin server, and served that instead. In other words, the cache was being bypassed entirely.

We posted a support entry to the CloudFlare community and they provided some insight on how to fix this:

This was the issue:

- Our origin server was a node.js server serving our websites and assets (images, CSS, JS, etc.)

- We deployed our app into a Dokku container running on a server. Dokku is basically an open source Heroku. Dokku serves the app on our domain via an NGINX proxy.

- We then put CloudFlare in front of that. 

- When hitting the website in a browser, CloudFlare showed all resources as EXPIRED or MISS, yet the Cache-Control header in the browser's debugger looked correct (cache-control: public, max-age=14400)

- Digging deeper, it turned out the origin server (the node.js app) was sending "Cache-Control: public, max-age=0". CloudFlare honoured that, marked the resource EXPIRED at the edge, and applied its own "cache-control: public, max-age=14400" on the CDN-to-browser leg. So the browser header looked fine, but the real benefit of caching on the CDN was lost.

So, how do you fix it?

1) Find out the origin headers by running this command in your terminal. Replace values as needed: the URL should be an asset on your site, and [your origin server ip] should be your server's IP address.

curl -k --dump-header - -o /dev/null -H "origin:" --connect-to ::[your origin server ip]: https://yourdomain.com/path/to/asset.jpg

2) You will see something like the following origin response headers, which reveal the issue:

HTTP/1.1 200 OK
Server: nginx
Content-Type: image/jpeg
Content-Length: 41326
Connection: keep-alive
X-Powered-By: Express
Accept-Ranges: bytes
Cache-Control: public, max-age=0
Last-Modified: Mon, 31 Aug 2020 03:18:52 GMT
ETag: W/"a16e-17442864d60"

3) Finally, set the correct Cache-Control header in your node/express server (for static assets served by Express, the maxAge option of express.static controls this). Once you do, CloudFlare will pick it up and honour the cache on the CDN.

Happy coding!

Tuesday, September 1, 2020

Retiring Old Docker Containers in Dokku Environment

I love Dokku and I've been using it for more than 6 years to host production applications. It really streamlines the deployment of new releases, and it is such a handy platform for developers moving into serious app release cycles.

If there is one pain point I have had over the years with Dokku, it is the "Docker baggage" the runtime leaves behind over time. I have had situations where my servers ran out of memory or disk space, and upon digging into the issue I discovered the cause: old Docker containers (Dokku is basically an abstraction layer over Docker) that keep running even after a new version of the app launches in a new container, or retired Docker images left behind that consume a lot of disk space.

I've seen numerous issues on the Dokku GitHub page like this - Old commit's containers do not ever shut down - which were long ago closed as resolved, but I've seen the problem persist.

I've blogged previously about how to clean up Docker images here: Dokku app deployment fails with a “main: command not found. access denied”. Its most likely a storage space issue... In this post I will walk you through a "deep clean" I did on my live application, which is powered by 3 Dokku apps and a MongoDB container running on a server.

The following is highly risky because you are dealing with Docker directly. If you know what you are doing it should be OK, but proceed at your own RISK!

Step 1: Start with a benchmark of memory and disk space so you can see how much you actually save.

Memory: run this command in the terminal:

free -m 

Disk space: run this command in the terminal:

df -h

/dev/disk... is your hard disk, but you will also see the junk Docker container filesystems listed here, which hopefully we can clean up.