New JS metric: Meet Interaction to Next Paint (INP)

"Interaction to Next Paint", INP for short, measures the time between a user interaction (click, tap or keypress) and the next visual feedback presented to the user. It should be below 200ms.

So, you just got used to LCP, CLS and FID, and now INP might soon join the party. The Interaction to Next Paint metric is very fresh. While Google already wrote about INP on web.dev, they will elaborate on this metric a bit more during Google I/O (Thursday, May 12th).

Follow-up of Responsiveness metric

You might remember me saying before that a new metric was coming. That was when I covered the introduction of the Responsiveness metric.

The INP metric is the polished version of the Responsiveness metric: "Interaction to Next Paint" is just the new (and likely final) name for Responsiveness, combined with a slightly different way of measuring things.

Google has currently only published INP data for the last few months (do note: it's still an experimental metric). And even Google's INP article on web.dev is very fresh. So let's dive into this new metric a bit more:

  • what is known about this metric;
  • what I know from other metrics;
  • and what I expect from the new "Interaction to Next Paint" metric.

Yet another metric

The fact that INP is only being introduced now doesn't mean the UX bottleneck wasn't already there before. Google also isn't introducing this metric to annoy us; they are just on a quest to give us a better idea of the impact on real users and to make it visible to a wider range of stakeholders and roles.

Is INP really about JavaScript?

The INP metric isn't necessarily about JavaScript. But as INP, just like FID, is impacted by main thread blocking work, which is often caused by long or too many JavaScript tasks, you'll find that JavaScript will often be the biggest offender.
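
To illustrate the main thread angle: a common mitigation is to break long JavaScript tasks into chunks, yielding between them so the browser can handle interactions and paint. A minimal sketch, with function names of my own (newer browsers also offer dedicated scheduler APIs for this):

```javascript
// Yield control back to the event loop so queued interactions
// can be handled and painted (a common setTimeout-based pattern).
function yieldToMain() {
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process items in small chunks instead of one long blocking loop.
async function processInChunks(items, handler, chunkSize = 50) {
  for (let i = 0; i < items.length; i += chunkSize) {
    items.slice(i, i + chunkSize).forEach(handler);
    // Give the main thread room between chunks.
    await yieldToMain();
  }
}
```

The total work stays the same, but no single task blocks the main thread long enough to delay the visual feedback of an interaction.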

How is INP different from the FID metric?

Only Google knows if INP will replace FID in the Core Web Vitals toolbox. But for now, they might actually live next to each other: FID still being a Core Web Vitals metric and INP not being one yet.

FID only accounts for the delay after the first interaction, hence First Input Delay. FID is about a good first impression, just like the FCP and LCP metrics. INP is more like the CLS metric, as it is tracked during the whole life cycle of a page.

if the first interaction [...] has little to no perceptible input delay, the page has made a good first impression

web.dev/inp/

INP, however, works a bit differently:

INP introduction

If you didn't feel like reading about the Responsiveness metric, here's a short introduction:

We already had the First Input Delay (FID) metric. FID tracks the time between the first user interaction and the first moment the browser is idle and able to act on that interaction. This includes neither the delays during all other interactions nor the visual feedback. The visual feedback (in the form of a browser paint) can take a bit of time as well, and is actually more important, as that's what users will notice.
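
In terms of the browser's Event Timing API, the difference can be shown with the fields a PerformanceEventTiming entry exposes (startTime, processingStart, duration). The sample entry below is made up for illustration:

```javascript
// FID only captures the input delay: the wait before the browser
// could even start handling the interaction.
function inputDelay(entry) {
  return entry.processingStart - entry.startTime;
}

// An INP-style latency is the full time from interaction to the next
// paint, which the Event Timing API reports as the entry's duration.
function interactionLatency(entry) {
  return entry.duration;
}

// Hypothetical entry: the click happened at 1000ms, handling started
// at 1070ms, and the next paint completed 250ms after the click.
const entry = { startTime: 1000, processingStart: 1070, duration: 250 };
```

With this example entry, the input delay is only 70ms while the user waited 250ms for visual feedback, which is exactly the gap INP is meant to expose.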

So that's where Responsiveness, nowadays INP, came in: tracking all interactions during a page's life cycle. It even added nuances for different interaction types, such as click, tap or keypress.
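
As a rough sketch of how INP aggregates those interactions (my own simplification of web.dev's description, not Google's actual code): INP reports the worst interaction latency on the page, but for pages with many interactions, one highest outlier per 50 interactions is ignored.

```javascript
// Rough sketch: INP takes the worst interaction latency, skipping
// one outlier per 50 interactions on interaction-heavy pages.
function estimateINP(latenciesMs) {
  if (latenciesMs.length === 0) return undefined;
  const sorted = [...latenciesMs].sort((a, b) => b - a); // worst first
  const outliersToSkip = Math.floor(latenciesMs.length / 50);
  return sorted[Math.min(outliersToSkip, sorted.length - 1)];
}
```

So a page with only a handful of interactions reports its single worst one, while a long-lived page with hundreds of interactions isn't punished for a few extreme outliers.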

INP thresholds

Just like all other metrics, INP experiences are divided into 3 buckets: good, moderate and poor. Their thresholds are as follows:

  • anything up to 200ms is considered good;
  • between 200 and 500ms is considered moderate;
  • and anything above 500ms is considered a poor experience.

These buckets are illustrated in the image below, as published on web.dev/inp/.
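
The thresholds above can be captured in a tiny helper (a sketch; the bucket names follow this article's wording):

```javascript
// Classify an INP value (in milliseconds) into the three buckets:
// up to 200ms is good, 200-500ms is moderate, above 500ms is poor.
function inpBucket(inpMs) {
  if (inpMs <= 200) return 'good';
  if (inpMs <= 500) return 'moderate';
  return 'poor';
}
```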

Aim for lower INP

While 200ms is the threshold of the good bucket, you might want to aim a bit lower to anticipate situations where:

  • you're getting more visitors using mid-range or lower-end devices;
  • your website/webapp keeps growing, introducing more client-side and main thread work via JavaScript;
  • marketers or SEO specialists add additional third parties.

INP threshold differences with Responsiveness metric

But how is this different from the Responsiveness metric? The Responsiveness metric used budgets of 50ms for typing and 100ms for click and tap events. But with different budgets per interaction type, how would you embed this in one overall metric with 3 buckets? You would then have to split the metric in two: one for each interaction type.

INP challenges

Let's dive into INP challenges.

The role of third parties

If we all implemented third parties in a non-blocking way, for example by always using a (Google or Adobe) tag manager, we wouldn't be giving third parties a chance to impact the FCP or LCP metric. And the CLS metric often isn't caused by third parties either (based on audits I've done).

When it comes to FID, it could be 50/50; it really depends on when the first interaction happens. As soon as the page is visible, users might already interact with it before third parties have been given the chance to execute. And you've got yourself a good FID score.

This limitation is what INP is trying to solve. But this also means: third parties will now have a bigger chance of impacting INP and, if INP becomes part of Core Web Vitals, of impacting Core Web Vitals even more.

While it already was a good idea to put third parties on a diet, we now just have a better way of getting an idea of their impact. It will be hard, though, to tell if a bad INP score is the result of 1st party or 3rd party performance offenders.

1st party INP challenges

The following was already important. One could say that by introducing the INP metric, Google is just giving us insights into these bottlenecks, for free.

So, how does the INP metric translate to the real world? The following is important to achieve:

  • when adding a product to the cart, fetching additional product information might be needed. But if you only show feedback after the API request has finished, you can't know how long that will take across all users and their conditions. Be sure to show feedback as soon as possible, and offload the background work as much as possible;
  • be sure to make live search results fast, or if that isn't doable because of platform limitations, use a third party solution or switch to an on-site search that depends on less JavaScript. A server side search solution could then be the way to go;
  • image galleries, or showing other product information based on a newly selected product size or colour, should be fast to respond. If this isn't doable because showing new details depends on additional API requests, be sure to show a temporary visual change until the new product information has been fetched and is ready to be injected into the webpage;
  • even showing the hamburger menu after opening it should happen very fast. This often doesn't need to involve a lot of JavaScript. However, before Wix started improving their boilerplate, a lot of Wix websites suffered from such issues, despite this being a relatively basic interaction.
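
The add-to-cart advice above could be sketched as follows. This is a minimal, hypothetical sketch: showFeedback, fetchProductDetails and updateCartUI are stand-ins for your own UI and API code, not real APIs.

```javascript
// Hypothetical add-to-cart flow: give instant visual feedback first,
// then do the slower network work in the background.
async function addToCart(productId, { showFeedback, fetchProductDetails, updateCartUI }) {
  // 1. Immediate feedback: this is what keeps INP low, because the
  //    user sees a paint right after the interaction.
  showFeedback('Added to cart…');

  // 2. Offload the slow part; the user isn't left staring at a
  //    frozen button while the API request runs.
  const details = await fetchProductDetails(productId);
  updateCartUI(details);
}
```

The key design choice is the ordering: the paint-triggering feedback happens before, not after, the unpredictable API round trip.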