Reducing response time with server side caching

Server side caching means that responses are cached on the server itself, before they ever reach a browser. And from a pagespeed perspective, it makes the most sense to do this for contents that tend to be dynamic. Let's dive into the basics of server side caching a bit more.

Unlike most webpage contents, images, stylesheets and JavaScript often are static files: once created, their contents won't change. And if they do, they are often given a new filename so browsers won't keep serving an outdated cached copy.

Static versus dynamic contents

As a result, the contents of images, stylesheets and JavaScript files often are unique per filename, which is why these are considered static files.

Dynamic webpages

Webpages on the other hand often tend to be dynamic. This makes sense for webshops where prices, products and maybe reviews will change on a daily basis. Or maybe even on an hourly basis if you want to show stock information as well.

The same applies to most non-ecommerce webpages too. For example, publisher websites showing the latest articles on their homepage.

Retaining URL and its SEO ranking

Unlike filenames of stylesheets and images, you don't want to rename a blogpost or product detail page each time its price changes, an item is added to the navigation or a typo is fixed.

If you changed the URL, bookmarked pages as well as shared URLs wouldn't work anymore, and redirecting them to a newly given URL would massively screw up SEO ranking.

Platform making webpages' TTFB slow

So, in general, when a server receives a request from a browser, the task of generating (new) HTML is handed over to the platform in play:

  • This could be WordPress in case of a website;
  • or for example Magento in case of a webshop.

"Reduce server response time" is misleading

And a fast server as well as a proper server configuration sure helps. But despite most articles talking about "reduce server response time", the server itself often isn't the biggest bottleneck.

It's just called "reduce server response time" as it's about work done on the server. An automated test doesn't know, or doesn't care, who's to blame: server or platform. Spoiler alert: users don't care either, they just want it to be fast. So when seeing "reduce server response time" in an automated test, you now know it could actually be a bit misleading.

WordPress and Magento also are good examples of platforms with server side challenges, simply because of the amount of server side work they do, as well as the plugins being used. Although users won't know, these platforms do a lot of work behind the scenes. For example, fetching contents or building different components such as:

  • navigation;
  • article or product listing;
  • and maybe showing reviews or replies.

This involves all kinds of lookup work, often in the form of database queries or API calls, or even both. And maybe embedded images are being optimized on the fly as well.

Slow server response time will impact TTFB metric

So, users won't know what is happening server side, but they will notice the end result: they will be looking at a white screen a bit longer before seeing contents being rendered by the browser. This moment is captured by the First Paint (FP) or First Contentful Paint (FCP) metric.

TTFB versus FCP

But it's actually the Time To First Byte (TTFB) metric that is directly impacted by "slow server response times". However, the average user won't be a web developer, so they won't know when TTFB happened and when FCP happened. That's why it is important to also look at render blocking resources.

You should reduce your server response time under 200ms

developers.google.com

But as a slow TTFB directly pushes back the FCP metric as well, TTFB already is an important metric to improve if monitoring tells you it's below par. Google wants you to aim for a TTFB below 200ms.

How to improve server response times

Knowing that pages are more dynamic than images or stylesheets, it makes sense that reducing the response time of webpages requires a different strategy. Server side caching is just one of the strategies to improve server response times. It could even be a quick fix, but that depends on your platform and the other services you're using.

Basics of server side caching

The goal of server side caching is to have a copy of a page available on the server. This prevents the need to run the platform and all its (PHP?) files to do all the query and page building work each time a page hit is received. Such an optimization actually leads to a double win:

  • reduced server response time;
  • also enabling the server to deal with more simultaneous requests.

Obviously, when some parts of the webpage need to be updated, such as the price or navigation contents, then the cache can't be used. It needs to be removed to prevent visitors from seeing old contents. Deleting the cache is called 'cache invalidation' or 'purging the cache'.

But as a result, the very first visitor won't benefit from server side caching, as no copy is available yet. That's when the platform has to deal with the request and assemble the HTML. The result will then be saved by the server to become a new copy for other visitors, or just page requests in general.
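The flow above (serve a copy when one exists, otherwise let the platform build the page and store the result, and remove the copy when contents change) can be sketched as a minimal, hypothetical in-memory cache layer. The class and function names here are made up for illustration; real implementations like Varnish or WordPress caching plugins are far more elaborate.

```python
class PageCache:
    """Minimal in-memory page cache: serve a stored copy when available,
    otherwise let the platform build the page and store the result."""

    def __init__(self):
        self._store = {}  # URL -> generated HTML

    def get(self, url, build_page):
        if url in self._store:       # cache hit: skip the platform entirely
            return self._store[url]
        html = build_page(url)       # cache miss: the platform does the work
        self._store[url] = html      # save a copy for the next visitors
        return html

    def invalidate(self, url):
        """'Cache invalidation' / 'purging the cache': drop an outdated copy."""
        self._store.pop(url, None)


def slow_platform(url):
    # stand-in for WordPress/Magento running queries and building the page
    return f"<html>page for {url}</html>"


cache = PageCache()
cache.get("/product/42", slow_platform)   # first visitor: platform builds the page
cache.get("/product/42", slow_platform)   # later visitors: served from the copy
cache.invalidate("/product/42")           # e.g. after a price change
```

Note how the first `get` per URL is always the expensive one: that is the "very first visitor" from the text, who pays for building the copy everyone after them benefits from.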

Server side caching exceptions

There are situations where you don't want to return a copy to browsers, despite the requested URL being the same. For example when someone is submitting a form, as the submitted details should be processed. And maybe the response should then show task specific information, such as a confirmation message with checkout details or a personalized thank you page.

Luckily, the web already agreed on using different HTTP methods for such tasks. These tasks are done using the POST method, while general page requests are GET requests (there are way more HTTP request methods). And any caching mechanism knows not to serve cached responses when dealing with POST requests, keeping responses to dynamic requests dynamic as well.
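That decision comes down to only ever considering the cache for "safe" methods. A hypothetical sketch, not tied to any specific caching product:

```python
# Only "safe" request methods may be answered from the cache.
# GET and HEAD don't change anything server side; POST does.
CACHEABLE_METHODS = {"GET", "HEAD"}


def may_serve_from_cache(method: str) -> bool:
    """POST requests (form submissions etc.) must always reach the
    platform, so their responses can be personal and task specific."""
    return method.upper() in CACHEABLE_METHODS


may_serve_from_cache("GET")    # True: a regular page request
may_serve_from_cache("POST")   # False: e.g. a checkout form submission
```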

The same often applies to on-site search result pages, to show contents tailored to the search query. Or to auto-filling input fields based on links from a newsletter. This is often done in combination with unsubscribe hyperlinks, as only GET requests can be made from within a newsletter. In this case, a query string is added to the URL. For example:

<a href="https://www.erwinhofman.com/unsubscribe.html?emailaddress=info op erwinhofman punt com">unsubscribe from newsletter</a>

Anything after the question mark is part of the query string and should lead to a personalized webpage. But this wouldn't work if the server returned a copy. If you implemented server side caching without giving it much thought, such a page could accidentally return someone else's e-mail address. That would even qualify as a data breach. So, you want to exclude these pages from your server side caching strategy.

On the other hand, there are common query strings that won't change the HTML contents. Such social media, campaign or ads related query strings could still bypass your cache and impact your TTFB. And that's a shame, as those are only used by third parties to track user behaviour.
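Both cases can be handled when building the cache key. A hypothetical sketch: URLs carrying personalizing parameters (like the emailaddress example above) bypass the cache entirely, while well-known tracking parameters are stripped from the key so they can't needlessly cause cache misses. The exact parameter names are assumptions; your platform and campaigns may use different ones.

```python
from urllib.parse import urlsplit, parse_qsl, urlencode

# Common tracking parameters that don't change the HTML
# (assumption: your platform ignores them server side).
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign",
                   "utm_term", "utm_content", "gclid", "fbclid"}

# Parameters that personalize the response, like the newsletter
# example above; such URLs should never be served from a shared copy.
PERSONAL_PARAMS = {"emailaddress"}


def cache_key(url: str):
    """Return a normalized cache key, or None when the URL must not be cached."""
    parts = urlsplit(url)
    params = parse_qsl(parts.query, keep_blank_values=True)
    if any(name in PERSONAL_PARAMS for name, _ in params):
        return None  # personalized: bypass the cache entirely
    kept = [(n, v) for n, v in params if n not in TRACKING_PARAMS]
    query = urlencode(sorted(kept))  # sort so param order doesn't split the cache
    return parts.path + ("?" + query if query else "")


cache_key("/blog.html?utm_source=newsletter")       # '/blog.html'
cache_key("/unsubscribe.html?emailaddress=a@b.c")   # None
```

Normalizing the key this way means `/blog.html` and `/blog.html?utm_source=newsletter` share one cached copy, while personalized URLs always reach the platform.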

Who should do server side caching

You might now think that platforms have server side caching baked in, as in the end it's the platform itself that knows which responses are allowed to be cached.

Plugin and server

Unfortunately, it often isn't baked in, and this is why caching mechanisms are popular. For example:

  • WP Cache when dealing with WordPress.
    This is a plugin that works on top of WordPress and determines when to serve a copy, or when to let WordPress do the work.
  • Varnish, for example when dealing with Magento.
    Varnish is an HTTP accelerator that isn't platform specific. A more finetuned strategy might be needed depending on the platform you're using.

Do note that most pagespeed plugins won't always come with Core Web Vitals wins. Caching plugins often are an exception, and maybe even a quick win. They might even work on top of other plugins that would normally slow down server response times even more, such as plugins that merge static resources on the fly (although merging files might not always be a good idea, but that's another topic).

Content Delivery Network

And if that isn't helping, one could use a Content Delivery Network, such as Cloudflare, to set up a caching strategy. Combining server side caching with a CDN's geographical coverage could help even more when dealing with international audiences.

While a CDN is already doing the heavy lifting when it comes to server side caching, we can have more visitors benefiting from an optimal TTFB by improving the CDN's cache hit ratio.
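The cache hit ratio can be derived from edge access logs. A hypothetical sketch, assuming each logged request carries a cache status value such as the one Cloudflare exposes in its cf-cache-status response header (the sample data below is made up):

```python
from collections import Counter


def cache_hit_ratio(statuses):
    """Share of requests served from the CDN's cache.
    'HIT' counts as a cache hit; values like 'MISS', 'EXPIRED',
    'BYPASS' and 'DYNAMIC' all mean the origin had to respond."""
    counts = Counter(s.upper() for s in statuses)
    total = sum(counts.values())
    return counts["HIT"] / total if total else 0.0


# e.g. cache status values collected from edge logs (made-up sample)
sample = ["HIT", "HIT", "MISS", "HIT", "DYNAMIC"]
cache_hit_ratio(sample)  # 0.6
```

A low ratio often points at the issues covered earlier: uncacheable query strings splitting the cache, or overly aggressive cache invalidation.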