Technical Core Web Vitals FAQ

Why is the CLS metric part of the Core Web Vitals?

CLS stands for Cumulative Layout Shift: it measures how much content shifts around during a page visit. Ads that are lazily loaded, for example, might push other content down. This impacts user experience, as users have to reorient while reading. Because Google knows this hurts UX and thus increases bounce, CLS has become a ranking factor.

What could impact CLS?

To give an idea of CLS, the following could impact layout shifts:

  • Images without dimensions or without a fixed parent;
  • Ads or iframes that are inserted or loaded late;
  • Dynamically inserting new elements and content on the fly (client-side rendering);
  • Web fonts changing characteristics of the text, such as line-height, letter-spacing and letter width.
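Each individual shift is scored as the product of an impact fraction (how much of the viewport the shifting element affects) and a distance fraction (how far it moved, relative to the viewport's largest dimension). A minimal sketch of that calculation, using hypothetical numbers:

```javascript
// Layout shift score = impact fraction * distance fraction.
// impactFraction: share of the viewport affected by the shifting element
// (the union of its before/after positions); distanceFraction: how far
// it moved, relative to the viewport's largest dimension.
function layoutShiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

// Example: a lazily loaded ad pushes content occupying 75% of the
// viewport down by 25% of the viewport height:
console.log(layoutShiftScore(0.75, 0.25)); // 0.1875
```

Individual scores like this are then summed into the cumulative total, which is why several small shifts can still add up to a poor CLS.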

How to track and detect CLS?

If you want to track and detect CLS, first decide whether you want lab data CLS, the CLS experienced by real users, or your own CLS measurements for debugging specific situations and conditions.

I wrote an article discussing 9 ways to track and detect CLS for yourself and for real users, which also contains a code snippet to test CLS yourself.
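To illustrate the tracking logic: Chrome only counts shifts that were not preceded by recent user input. The sketch below accumulates a CLS total from hand-made sample entries; in a real page, these entries would come from a PerformanceObserver observing the 'layout-shift' entry type.

```javascript
// Sum layout-shift entries into a CLS total, excluding shifts that
// follow recent user input (hadRecentInput), matching how Chrome
// reports CLS. The entries below are hypothetical sample data.
function cumulativeLayoutShift(entries) {
  return entries
    .filter((entry) => !entry.hadRecentInput)
    .reduce((total, entry) => total + entry.value, 0);
}

const sampleEntries = [
  { value: 0.125, hadRecentInput: false },  // ad iframe loaded in
  { value: 0.05, hadRecentInput: true },    // shift after a click: ignored
  { value: 0.0625, hadRecentInput: false }, // web font swap
];
console.log(cumulativeLayoutShift(sampleEntries)); // 0.1875
```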

Why is the LCP metric part of the Core Web Vitals?

Google was already tracking Largest Contentful Paint (LCP) data well before it became part of PageSpeed Insights or Core Web Vitals. The latter happened on May 27th, while LCP data of real users has been tracked since at least September 2019.

In contrast to its predecessor, the First Meaningful Paint (FMP) metric, LCP tells a better story about the (possibly) most important element within the viewport. Possibly, as other content could still be more meaningful, but compared to FMP it is clearer which element we are talking about when it comes to user engagement and tracking meaningful paints in general.

What could impact LCP?

To give an idea of LCP, the following could impact the largest contentful paint:

  • Bad hosting or a heavy CMS, for example one with quite a few plugins, is more likely to result in a higher server response time and thus a higher TTFB. As a result, the browser receives the HTML later, and the LCP is shifted back as well;
  • Render-blocking as well as parse-blocking resources will delay the browser in parsing and rendering the HTML that contains the largest element;
  • Slow resources, such as the resources responsible for the largest element when served with quite some network latency, will start later in the process;
  • Not serving responsive and thus smaller images, for example on product detail pages or for hero images on article pages;
  • Client-side rendering when it delays the layout and composition of the largest element within the viewport.

How to track and detect LCP?

Just like with CLS, there are lab data, field data and RUM flavours of LCP out there.

When debugging LCP myself, I'm using the following code snippet:

new PerformanceObserver((list) => {
  list.getEntries().forEach((entry) => {
    console.log('LCP candidate:', entry.startTime, entry.element);
  });
}).observe({type: 'largest-contentful-paint', buffered: true});

This LCP snippet tracks all newly detected LCP candidates until the first interaction. This is actually how Google Chrome tracks LCP as part of Core Web Vitals as well: any new LCP elements are ignored when they appear after the (first) user interaction.
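To illustrate that rule with hypothetical candidate entries: the reported LCP is the last candidate that was painted before the first user interaction.

```javascript
// Given LCP candidate entries (in paint order) and the timestamp of
// the first user interaction, the reported LCP is the last candidate
// painted before that interaction. Sample data is hypothetical.
function reportedLcp(candidates, firstInteractionTime) {
  const valid = candidates.filter((c) => c.startTime < firstInteractionTime);
  return valid.length ? valid[valid.length - 1] : null;
}

const candidates = [
  { startTime: 800, element: 'h1' },
  { startTime: 1900, element: 'img.hero' },
  { startTime: 4200, element: 'img.late-banner' }, // after interaction: ignored
];
console.log(reportedLcp(candidates, 3000).element); // 'img.hero'
```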

Why is the FID metric part of the Core Web Vitals?

The time delay between the (first) moment of user interaction and the moment the browser was able to respond to that interaction is captured in the First Input Delay (or FID) metric. Responses within 100ms are perceived as fast by users. Slower responses to user interactions might be experienced as an unnatural delay, leading to distrust or user frustration.
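As a sketch of the calculation with hypothetical numbers: FID is the gap between startTime (when the user interacted) and processingStart (when the main thread could start handling the event), which are the fields a real 'first-input' performance entry exposes.

```javascript
// FID = processingStart - startTime, both taken from a 'first-input'
// performance entry. The entry below is a hypothetical example.
function firstInputDelay(entry) {
  return entry.processingStart - entry.startTime;
}

const entry = { startTime: 2400, processingStart: 2530 };
const fid = firstInputDelay(entry); // 130ms, above the 100ms "fast" mark
console.log(fid > 100 ? 'needs work' : 'fast');
```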

What could impact FID?

To give an idea of FID, the following could impact the first input delay:

  • Render blocking JavaScript;
  • JavaScript that is deferred until full page load;
  • Long JavaScript tasks, such as ordering of search results or checkout calculations;
  • Long JavaScript execution time;
  • Large JavaScript bundles.

Why is FID not part of lab data/Lighthouse?

Lab tools such as Lighthouse (or PageSpeed Insights, as PageSpeed Insights uses the Lighthouse engine) can’t measure FID since there are no end users whose interactions could be measured. Lab tools could simulate an input, but every user has a different experience depending on their constraints, and thus also another perception of performance. This matters as different users might start interacting at different moments, resulting in different outcomes.

Instead of trying to simulate FID, these tools use a metric called Total Blocking Time (TBT) to predict potential FID bottlenecks. One of my articles might help you visualize FID bottlenecks. Do note that the Web Performance Working Group is working on improving FID.
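To illustrate the idea behind TBT with hypothetical task durations: every main-thread task longer than 50ms contributes its excess over 50ms to the total. (The real metric only counts tasks between First Contentful Paint and Time to Interactive, which this sketch omits.)

```javascript
// TBT sketch: each long task (> 50ms) contributes its time beyond the
// 50ms budget. Durations are hypothetical; in the browser they would
// come from 'longtask' performance entries.
function totalBlockingTime(taskDurations) {
  return taskDurations
    .filter((duration) => duration > 50)
    .reduce((tbt, duration) => tbt + (duration - 50), 0);
}

// Three tasks: only the 250ms and 90ms ones block (200 + 40 = 240ms).
console.log(totalBlockingTime([250, 30, 90])); // 240
```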

Why isn't FCP part of Core Web Vitals?

You could say the bar for LCP already is quite high. You should enable users to see the largest element as soon as possible. A fast FCP is still important towards early user engagement and thus reducing the bounce rate. For example, it is better to have an FCP of 1 second and an LCP of 2.5 than an FCP of 2.3 and an LCP of 2.4.

But Google might have chosen not to make FCP part of Core Web Vitals as the 'is it happening' aspect of user experience is already covered by the LCP metric.

Why is FCP still being shown in field data?

Although not part of Core Web Vitals, you will still see FCP displayed in CrUX or field data when:

  • your site or shop has enough visitors;
  • you are checking PageSpeed Insights or Google Search Console data.

Reasons might be:

  • aesthetics: four graphs are easier to divide over two columns than three;
  • FCP is still the first noticeable metric after TTFB. TTFB is important, but FCP can still be pushed back depending on your theme or frontend code. As a result, FCP still gives you a good understanding of first user engagement. A white screen for a long period could increase bounce, so you might want to keep FCP as low as possible (while still painting meaningful elements early).