uses the Apache server. It failed; here's why.

Update: headers updated.

OK, so I was curious and looked into the headers, and, to my surprise, the response reported 'Apache'. However, as explained below, Apache was configured incorrectly.

Here are the main headers of the response:

HTTP/1.1 200 OK
Server: Apache
Accept-Ranges: bytes
Content-Type: text/html
Access-Control-Allow-Origin: *
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 12213
Connection: keep-alive

…the only good news here is that gzip is being used.
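For reference, gzip compression like the one seen in these headers is commonly enabled in Apache via mod_deflate. A minimal sketch, assuming the module is loaded (the content types listed are illustrative):

```apache
# Hypothetical sketch: enable gzip compression for common text-based
# responses via mod_deflate (module must be loaded in the server config).
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>
```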

The next problem, also visible in the headers, is that no caching is set up for static files. For example, these files (and many others) are served without browser-caching headers:

…this means that each of those files is served by Apache again on every page view and every refresh. The lack of caching becomes a serious problem when the site also has bugs: users who refresh and revisit pages over and over because of errors multiply the load on the server. For example, if 1 million visitors retry a "single" page just 2 to 3 times, that produces 2 to 3 million requests for "each" static file! Setting a cache TTL of even 1 hour could have reduced the load on Apache considerably.
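The back-of-the-envelope arithmetic above can be sketched as follows (the visitor and retry counts come from the text; the number of static files per page is an illustrative assumption):

```python
# Rough sketch of the server load described above.
# Numbers are illustrative assumptions, not measurements from the site.
visitors = 1_000_000   # visitors hitting the page (from the text)
retries = 3            # page loads per visitor due to refreshes/errors
static_files = 20      # assumed uncached static assets per page

# With no browser caching, every page load re-requests every asset.
requests_per_file = visitors * retries
total_requests = requests_per_file * static_files

print(f"{requests_per_file:,} requests per static file")
print(f"{total_requests:,} static-file requests in total")
```

With a 1-hour cache TTL, repeat loads within that window would be served from the browser cache instead, eliminating most of these requests.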

They are using Akamai (among others) for CDN. That's fine, but not enough assets are served through the CDNs: most are served directly by the origin, and without caching headers.
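Even behind a CDN, origin responses need caching headers. In Apache, a cache TTL like the 1-hour one suggested earlier is typically set with mod_expires. A minimal sketch, assuming the module is enabled (the content types listed are illustrative):

```apache
# Hypothetical sketch: give static assets a 1-hour browser cache TTL
# via mod_expires (module must be enabled; types are illustrative).
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/png              "access plus 1 hour"
    ExpiresByType text/css               "access plus 1 hour"
    ExpiresByType application/javascript "access plus 1 hour"
</IfModule>
```

This emits `Expires` and `Cache-Control: max-age` headers, so both browsers and the CDN can reuse the files instead of hitting Apache every time.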

If even the basics (taking needless load off Apache, and caching headers for static JavaScript, CSS, and images) are not covered, that gives you an idea of the level of expertise that was lacking.

Also read: Benchmark of Nginx vs Apache.
