Shopware 6 as your ideal Core Web Vitals boilerplate?

When it comes to Core Web Vitals, there is no one-size-fits-all solution. The same applies to (Shopware) shops, as each will have some customized functionality. However, the rules of thumb are the same when it comes to Core Web Vitals, and the same goes for the typical Shopware shop, as they all share the same boilerplate.

With Core Web Vitals becoming a ranking factor in less than a month (mid-June, with a gradual rollout), this topic is more current than ever. However, Google's focus on pagespeed, or rather page experience (as the update is called, Core Web Vitals being part of it), isn't new.

To come up with quick wins towards this ranking update and, moreover, technical UX, I looked at a Shopware 6 shop. The audit, and especially this post, is in preparation for the Shopware United round table session happening this coming Thursday.

Register for the event

But let's start with a quick introduction of Core Web Vitals.

Core Web Vitals, why only now?

You might already know PageSpeed Insights. Google even introduced a new HTML framework some years ago, called Accelerated Mobile Pages (AMP). My advice for the latter: don't adopt it at this point and invest your time in optimizing the actual shop instead.

But this does prove a point: for some years now, Google has been trying to make pagespeed a topic of interest amongst the different roles in everyone's team: SEO and UX specialists, product owners and developers. Sure, the metrics Google introduces via Lighthouse seem to be changing all the time, but only because they try to give developers a better sense of the impact of our code on (technical) UX.

Ok, but what are Core Web Vitals?

A set of just three metrics to capture the technical UX from different perspectives, so not only pagespeed. And Google will be using these for ranking.

An important nuance is that Google will be looking at the experience of real visitors: the visitors of your very own shop, obviously. This means that when your audience is typically on older devices or cheaper data plans, has poor connectivity, or lives in warmer countries, you might have a harder job passing Core Web Vitals with the same platform.

Let's dive into the three real UX metrics:

First Input Delay (FID)

FID gives you an idea of the responsiveness of the browser at the moment a user starts interacting with your website or shop for the first time (scrolling excluded, as browsers already offload scrolling to another thread).

Google might start to track other interactions in the future as well. But for now it means you should limit the amount of JS, or just the amount of browser work in general.

Largest Contentful Paint (LCP)

Largest Contentful Paint is about the largest element within the viewport. It is only tracked before any user interaction took place. Why, you wonder? It would be a shame if a fancybox that got opened due to a button click would impact your score.

So, any large (and typically above-the-fold) text element such as a heading or paragraph, or any image element such as a background image, video placeholder, or hero and product image, should load as soon as possible.

Cumulative Layout Shift (CLS)

Who likes shifting content? For example when you're just reading a bit of content while also Netflixing, or when you're in a more serious mood and trying to order products.

When elements shift, it might distract your users or they might end up clicking on an ad, instead of adding a product to their cart. You will leave them disoriented and they might leave the page depending on their state of mind.

Where is Shopware 6 standing?

Glad you’re asking! Shopware actually provides a very clean boilerplate. And with Shopware 6 being a relatively new product, you aren’t as likely to run into technical debt to the same degree as, let’s say Magento 2 (my personal finding).

One example is Shopware's CSS and JS approach in version 6: CSS first, JS second. All JS is actually deferred by default as the typical all.js is moved to the footer. This means:

  • better FCP and potentially LCP experiences by default;
  • important elements are usually server-side rendered, while Magento 2 typically renders its Fotorama product images client side;
  • a reduced chance of technical (JS) debt in the future.

As a result, most Shopware 6 shops I see (and sometimes -out of curiosity- test when I see a new shop being launched) are doing quite well in real life from the start.

Next to Shopware, do it yourself

For today's findings, I audited Quickparts, a Danish webshop built in Shopware 6. And with their learnings, we can all do an even better job on top of the Shopware platform.

Proper server side/CDN caching

I won't elaborate on the use of CDNs. But when configured correctly, they can cache the HTML responses for subsequent visitors. This results in faster response times, improving one of the first important metrics: Time to First Byte (TTFB).

However, one aspect that is often overlooked is query strings. For URLs containing query strings, CDNs typically won't serve cached responses and will loop back to your server. It's quite likely your server will then run the Shopware platform and its PHP files, generating a new response. You can see this as a chained delay, and it will drastically impact the TTFB. This happened most often for the homepage, which we could see coming, as most Google and Facebook ads lead to the homepage.

I visualized the difference between such requests in the histogram chart below:

Histogram comparing requests without query strings (grey bars, 'normal requests') and with query strings (coloured bars, coming from Google or Facebook ads, for example).

We can clearly see a spiked grey bar in the 0-300ms bucket, which is what we want. However, as soon as visitors are visiting pages with a query string, the buckets look quite different without a huge spike in the fastest bucket.

You obviously want to serve dynamic content when someone is doing an on-site or facet search. However, for most external query strings, the same response can be served, as third-party query strings aren't likely to result in different HTML contents.

When setting up your CDN, be sure to at least exclude the following query strings, so that the CDN won't bypass a page-specific cache entry that is already stored:

  • gclid (Google Click ID);
  • fbclid (Facebook Click ID);
  • msclkid (Microsoft Click ID);
  • all utm_* (Urchin Tracking Module) parameters.

Next to TTFB, this will improve other pagespeed metrics such as FCP (not a Core Web Vitals metric at the time of writing) and LCP.
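As a sketch of what normalizing the cache key could look like at the edge (the function name and example URLs are hypothetical; many CDNs let you express the same exclusion in their own configuration instead):

```javascript
// Drop marketing query strings from the cache key, so requests arriving with
// gclid/fbclid/msclkid/utm_* can reuse the already cached page.
const MARKETING_PARAMS = ['gclid', 'fbclid', 'msclkid'];

function normalizeCacheKey(urlString) {
  const url = new URL(urlString);
  // Copy the keys first, as deleting while iterating would skip entries.
  for (const key of [...url.searchParams.keys()]) {
    if (MARKETING_PARAMS.includes(key) || key.startsWith('utm_')) {
      url.searchParams.delete(key);
    }
  }
  return url.toString();
}
```

Note that functional query strings (such as an on-site search `q` parameter) are left untouched, so dynamic pages still bypass the cache as intended.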

Smaller HTML payloads

Although the HTML source will be compressed, you might still be able to get some performance gains with just one line of PHP. Even when only speaking about HTML, some pages can grow quite large, for example due to megamenus or extensive facet search. I ran into a page with a size of 2,364 kilobytes (uncompressed). Doing the following resulted in a filesize reduction of 46.7%:

$html = preg_replace('/\s+/', ' ', str_replace(array("\r", "\n"), '', $html));

This might feel like nitpicking compared to other tips and tricks, but making your HTML payloads smaller will still make a difference, despite compression. This is even visualized in the following comparison, where the product listing page was 417.67% bigger due to facet search options being shown as well.

TTFB of different pages compared to the overall 75th-percentile TTFB.

To come up with a proper comparison, only requests without query strings were taken into account to prevent skewed results due to other factors.

In the case of the product listing pages of Quickparts, it would be a better approach to render the facet options client side (and on demand). After all, they are initially invisible anyway. Doing so would reduce the filesize from 2,364kb to 543kb (both uncompressed): a reduction of 77%, and even 89% after also removing whitespace.

Additionally, the number of initial DOM nodes was reduced from 10,269 to 3,020. This is important, as it's not just JavaScript that can impact performance, but also too many HTML/DOM nodes. Lighthouse will advise you to stay below 1,500 DOM nodes to reduce style calculations and memory usage.

This will improve TTFB and potentially LCP and even FID.

Do note that this should be tested first, as removing whitespace and line breaks could also remove them from filled-in form fields, such as textareas.
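One way around that caveat is to split out the sensitive blocks before collapsing whitespace. A JavaScript sketch of the idea (the helper is hypothetical; in Shopware you would apply the same approach to the PHP one-liner above):

```javascript
// Collapse whitespace in HTML while leaving <textarea> and <pre> blocks
// intact, so user input and preformatted text keep their line breaks.
function minifyHtml(html) {
  return html
    // The capture group makes split() keep the matched blocks in the result,
    // at the odd indices.
    .split(/(<textarea[\s\S]*?<\/textarea>|<pre[\s\S]*?<\/pre>)/i)
    .map((part, i) => (i % 2 === 1 ? part : part.replace(/\s+/g, ' ')))
    .join('');
}
```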

(Google) fonts and CSS import

Chances are you’re using custom fonts. If so, then probably Google Fonts. I typically see this happening in Shopware and Laravel shops: Google Fonts are embedded using the CSS @import construction.

However, this really is one of the bigger pagespeed bottlenecks. We can see this happening in the following waterfall:

Screenshot of a resource waterfall, where only important resources are shown.

As @import is used within all.css, the imported file only starts to download once all.css has been fully received. Unfortunately, the browser also waits to render pixels to the screen, delaying the First Contentful Paint metric (the light green vertical line).

We should actually enable the browser to start the download of the Google Fonts CSS at the same time that all.css starts downloading. In other words, a parallel download instead of introducing a chained request.

To help the browser a bit more and speed up the download of the font files, which are hosted on yet another domain, we should just use a direct link element and add a preconnect to the origin hosting the actual fonts (fonts.gstatic.com):

<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin />
<link rel="stylesheet" href="https://fonts.googleapis.com/css2?family=…" />

As FCP happens at 2,234 milliseconds in this mobile test on a 4G connection, we are already able to shave off 408ms, as the download of the Google Fonts CSS will then take place at the same time as the download of all.css. Using a simple CSS lazyloading technique for the Google Fonts will even shave off a full 767ms, bringing our FCP down to 1,467ms. Sweet!

This will improve FCP, LCP and potentially CLS.

Google's servers are often quite fast, but if your own stylesheet is faster to download, or you just want to prevent a single point of failure, you could actually lazyload the stylesheet. You could also self-host the fonts, as there is no cross-site shared caching anymore.
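One common way to lazyload such a stylesheet is the media-switch trick; a sketch, with the font family elided from the Google Fonts URL:

```html
<!-- The stylesheet downloads at low priority without blocking rendering
     (media="print" doesn't apply on screen), then is applied once loaded.
     The noscript fallback covers visitors without JavaScript. -->
<link rel="stylesheet" href="https://fonts.googleapis.com/css2?family=…"
      media="print" onload="this.media='all'" />
<noscript>
  <link rel="stylesheet" href="https://fonts.googleapis.com/css2?family=…" />
</noscript>
```

Combined with font-display: swap, text stays visible with a fallback font while the custom font arrives.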

Speed up image download

Most sites, Quickparts included, use a CDN for their images. But in your quest to pass Core Web Vitals, it isn't very convenient when an additional DNS lookup and SSL handshake is introduced and the browser first has to work its way through other HTML before discovering your images.

As your image might be quite critical for your LCP, you can speed up this process by introducing yet another preconnect:

<link rel="preconnect" href="" />

This will improve LCP.

Lazyload product images

You obviously want images to be shown as soon as possible. It's very likely that they make up your LCP element, for example on your listing page or product detail page, a Bageplade in case of Quickparts. Or just the hero on your homepage. But they can easily become a pagespeed bottleneck.

Mobile Largest Contentful Paint (LCP) of the homepage and a product detail page, according to Chrome DevTools.

Be sure to lazyload most images. Nowadays, you don't need JavaScript for this in most browsers. Native lazyloading isn't supported in Internet Explorer and Safari, so this actually is progressive enhancement: apply a loading="lazy" attribute to all images, except the ones that matter (see 'to lazyload or not to lazyload above-the-fold images').

Which ones are these? Your logo, as it is part of your branding which you want to be visible as soon as possible. But also all images on the first row of your listing page, or just the first image in your gallery. All other product images should then be lazyloaded to reduce resource congestion.

Do note that to prevent layout shifts (CLS), you should preserve space for those images that aren't showing up right away. This can be achieved by setting width and height attributes on your images, or by creating placeholders using the padding-top hack, which is very convenient when all images are displayed in the same dimensions.
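As a sketch of the padding-top hack for square (1:1) product images (the file name and alt text are placeholders):

```html
<!-- The wrapper's padding-top reserves a square box (100% of its width),
     so the lazyloaded image can't cause a layout shift when it arrives. -->
<div style="position: relative; padding-top: 100%;">
  <img src="/media/product.jpg" alt="Product" loading="lazy"
       style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; object-fit: cover;" />
</div>
```

For other aspect ratios, adjust padding-top accordingly (for example, 56.25% for 16:9).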

This will improve LCP and prevent CLS at the same time.

Third parties

Quickparts is also using some third party libraries. Not a lot, and most of them are actually behind a cookie consent to be compliant with privacy regulations (GDPR). Two other third parties are loaded directly:

  • a chat widget (Crisp);
  • a site search (Doofinder).

As these scripts are embedded right in the HTML source, they will be discovered by the browser quite early. The end result? They might get a higher priority than files that actually are more important. That's right: you don't want some third party to impact the download or parsing of your first-party resources.

Although an anonymous function and the async attribute were used, this is not your best bet. A better approach would be one of the following:

  • embed them using Google Tag Manager, and use the Window Loaded trigger type for such (less critical) third parties (a user isn't likely to start interacting with your search bar in the first few milliseconds);
  • if you're not using GTM, change their async attribute into defer to make sure they will only be executed after your first-party resources;
  • if your own JS files use a defer attribute as well, move those third parties to the end of your HTML source, while still using the defer attribute.

Do note that you should not use the async attribute anymore when using the defer attribute. Using both results in browsers favoring async, putting you back where you started.
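As an illustration of the last option (the widget URLs are made up, not the actual Crisp or Doofinder embeds):

```html
<!-- Deferred scripts execute in document order after HTML parsing, so placing
     the third parties after your own deferred bundle keeps them last. -->
<script defer src="/js/all.js"></script>
<script defer src="https://chat-widget.example/loader.js"></script>
<script defer src="https://site-search.example/embed.js"></script>
```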

This will potentially improve FID and LCP.

Your shop might be using a cookie notice. The one I tested did. This cookie notice was loaded via Google Tag Manager. Talking about chained requests: using GTM to load yet another third party actually delays the detection, and thus the download and execution, of that third party.

This normally isn't an issue at all, as we read before. However, sometimes a paragraph within the cookie notice might cover more pixels than any other element on the page, maybe even more than your logo. As a result, such a text node within your cookie notice will be considered the LCP. But as the cookie notice is displayed quite late and before the user has started interacting with the page, it will negatively impact the LCP.

An easy fix is to use shorter sentences within your cookie notice, or to speed up the download of your cookie notice, for example by embedding it right into your HTML source and maybe even using yet another preconnect. But before you start to think preconnect is going to be your new best friend: only use it sparingly!

This will improve LCP.


Shopware 6 is a good boilerplate to start with, but just like with any shop, there will always be customized work on top of it, and even third parties. Be sure to keep a close eye on performance yourself when developing with Shopware 6, and get familiar with technical UX.
If you want to read up on Core Web Vitals, my Core Web Vitals FAQ might help out.