Even clients that are already convinced they want to work with me still want to know what a collaboration and an auditing trajectory will look like. This article gives that background information.
Clients usually find me via LinkedIn or via relations mentioning my name: fellow specialists or product owners at other merchants and agencies, for example. And when someone reaches out, they often have one of the following roles:
- CTO or COO
- tech lead
- product owner
- marketing specialist
- or SEO specialist
(Almost) never a developer
That's right: although I'm a developer myself, it's almost never a developer that reaches out to me directly. That's because developers often aren't in the lead and aren't responsible for results (for example in the form of revenue), as they often aren't involved in business decisions either.
Looking at my past self, stubbornness could also play a role. Somewhere between when I started programming (2002) and now, I went through a phase where I thought I knew it all. Now that I'm in a specific niche and still learning every day, I know I couldn't have been more wrong. The Dunning-Kruger effect all over.
Contacting me
The way potential clients contact me differs. Only a small group calls me directly on my mobile phone after finding me. Some send me an email directly, others use one of my contact forms. My audit and training pages have their own forms, which helps me: when those are used, I already know where their primary needs lie.
Some of them reach out to me via LinkedIn, especially when my name gets tagged in pagespeed posts. And some will even do a double background check:
Found you by typing website speed consultant in Google and looking for the first non-paid organic result. Then looked you up on Linkedin and saw you had a nice 6k+ follower base and interesting posts.
Someone who contacted me via LinkedIn
This feedback actually led me to create a little LinkedIn landing page on my own website, with direct links to LinkedIn profile sections.
Pre-audit
I often start with a 5-minute pre-audit, even before the assignment is confirmed. Those five minutes always turn into more though, because one thing leads to another and it becomes forensic very fast. That's why I only do this when I consider a case to be legit and the client to be serious about improving performance, conversion and SEO.
This prevents me from doing free audits only ;)
I do a pre-audit because I need to know what I'm dealing with. And although I use fixed pricing for different initial setups, there can be some deviation based on the possible outcomes of a website or webshop audit.
When I run into big bottlenecks that are really blocking the user experience, I share them right away.
Looping back
In all non-EU cases, I'm the one reaching out and coming back with feedback first. Roderik, with whom I'm collaborating, will be the one to loop back for EU cases. The reason is simple: this alone can take quite some time, and Roderik is a bit better at speaking merchant and agency language too, tailoring things to their needs.
When a pagespeed audit is approved
Once a pagespeed audit is approved, I start using different tools:
- not the newest laptop ;)
- public data as well as Google's historic CrUX data of a website
- view-source in Google Chrome
- Webpagetest
- Chrome DevTools
- RUM data
Lighthouse missing in action
You might have noticed that Lighthouse is missing from this list. That's not just because there is no relation between your Lighthouse lab score and SEO, but also because even with a bad Lighthouse score, the real-life experience could still be good. As a matter of fact, this is the reason why Google changed the layout of their PageSpeed Insights page.
Moreover, Lighthouse has other downsides:
- it doesn't take the order of resources into account, so it won't flag A/B testing that's loaded behind other scripts and stylesheets;
- it will miss @import rules inside another stylesheet (a quick console check for those follows below), as well as invalid <head> elements that could break the <head>;
- and it won't look at your audience and their conditions.
For example, Lighthouse could come up with WebP recommendations to shave off 3 seconds. But maybe internet speed isn't the biggest challenge amongst your audience, so it might not deserve the priority that Lighthouse suggests.
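To make that @import point a bit more concrete: below is a small console sketch (just an illustration, not part of the audit deliverable itself) that walks the loaded stylesheets and lists @import rules, including nested ones.

```typescript
// Sketch (DevTools console): list @import rules inside loaded stylesheets,
// including stylesheets that were themselves pulled in via @import.
function listImports(sheet: CSSStyleSheet): void {
  try {
    for (const rule of Array.from(sheet.cssRules)) {
      if (rule instanceof CSSImportRule) {
        console.log(`@import ${rule.href} inside ${sheet.href ?? '(inline <style>)'}`);
        if (rule.styleSheet) listImports(rule.styleSheet); // recurse into nested imports
      }
    }
  } catch {
    // Reading cssRules throws for cross-origin stylesheets served without CORS headers.
    console.warn('Could not inspect (cross-origin):', sheet.href);
  }
}

Array.from(document.styleSheets).forEach(listImports);
```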
Tooling explained
The above might explain better why I'm using those other tools. First of all, an older laptop: your users might not be using the newest device and the best internet connection either, so there is no point in auditing under optimal conditions only.
By looking at the public data of a webshop, I get a sense of what real users and their conditions look like (a sketch of such a lookup follows after this list). For example:
- 100% 4G users, or still some 3G users as well;
- no FID or responsiveness issues despite a lot of JS, which could be a signal of an audience with high-end devices;
- and UX differences between FCP and LCP, or between homepage and origin data, tell me something as well.
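Most of that public data comes from Google's Chrome UX Report (CrUX). As an illustration of where such numbers come from (not necessarily my exact workflow), here is a minimal sketch that queries the public CrUX API. The API key and origin are placeholders.

```typescript
// Sketch: query real-user field data for an origin from the public
// Chrome UX Report (CrUX) API. CRUX_API_KEY and the origin are placeholders.
const CRUX_API_KEY = 'YOUR_API_KEY';

async function fetchCruxData(origin: string) {
  const response = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${CRUX_API_KEY}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        origin,
        metrics: [
          'first_contentful_paint',
          'largest_contentful_paint',
          'interaction_to_next_paint',
          'cumulative_layout_shift',
        ],
      }),
    },
  );
  if (!response.ok) throw new Error(`CrUX API returned ${response.status}`);
  const { record } = await response.json();
  // Each metric comes back with histogram buckets (good / needs improvement / poor)
  // and a 75th percentile value, which is what the thresholds are judged against.
  return record.metrics;
}

fetchCruxData('https://www.example.com').then(console.log);
```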
Then I look at Webpagetest data and the raw source code.
The raw source code is the very first boilerplate that a browser has to deal with. Looking at it gives me an idea of what the browser's first homework looked like.
Webpagetest will then confirm what I'm seeing in the source code, and the other way around. But obviously, Webpagetest tells me way more: metrics, dependencies, priorities, render-blocking resources et cetera. It might also miss render-blocking resources though, for example when those resources are preloaded. Another reason to also look at the raw source code.
When I need to dive deeper or want to reproduce a behaviour to confirm a suspicion or finding, I turn to Chrome DevTools. I also use it to check basic information such as request and response headers.
Chrome DevTools is also convenient for testing code on the go, which I might then include in an audit as part of the solution per bottleneck.
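As an example of such an on-the-go check (a Chromium-specific sketch, not necessarily part of every audit): the Resource Timing API exposes whether the browser itself treated a resource as render blocking, which makes for a nice cross-check next to Webpagetest.

```typescript
// Sketch (DevTools console, Chromium 107+): list resources that the browser
// itself marked as render blocking, via the Resource Timing API.
const resources = performance.getEntriesByType('resource') as PerformanceResourceTiming[];

for (const entry of resources) {
  // renderBlockingStatus is Chromium-only and not yet in every TypeScript DOM typing.
  const status = (entry as unknown as { renderBlockingStatus?: string }).renderBlockingStatus;
  if (status === 'blocking') {
    console.log(`${entry.name} was render blocking, took ~${Math.round(entry.duration)} ms`);
  }
}
```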
Real User Monitoring
But even then I might be missing something, as I can't simulate the conditions of someone's audience; no two users nor their conditions are the same. Maybe the TTFB looks good on my end, but is worse for those typically coming from Facebook, Google or newsletter ads and campaigns. Or maybe those with low-end devices are impacting the overall data the most.
Experiences can differ per template, and can even differ amongst pages sharing the same template. I don't always get to see this from the outside, because not all pages are eligible for Core Web Vitals data yet due to a low amount of pageviews, or because a website or webshop isn't tracking real UX itself yet.
There is no better way of gaining such insights and getting an idea of those factors than by tracking pagespeed and Core Web Vitals metrics amongst the real visitors of a shop. So I provide them with a little snippet to collect those insights without collecting revenue or personally identifiable information. This way, I also don't need access to their tools that may contain sensitive data, such as Google Analytics or Google Search Console.
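The actual snippet depends on the setup, but conceptually it boils down to something like the sketch below, built here on the open-source web-vitals library with a placeholder collection endpoint. It illustrates the idea; it is not the exact snippet I hand over.

```typescript
// Sketch of a minimal RUM snippet built on the open-source `web-vitals` library.
// The /rum-collect endpoint is a placeholder; only metric values and the page
// path are sent, no revenue or personally identifiable information.
import { onCLS, onINP, onLCP, onTTFB, type Metric } from 'web-vitals';

function sendToAnalytics(metric: Metric): void {
  const body = JSON.stringify({
    name: metric.name,     // e.g. 'LCP'
    value: metric.value,   // milliseconds (unitless score for CLS)
    rating: metric.rating, // 'good' | 'needs-improvement' | 'poor'
    page: location.pathname,
    // effectiveType ('4g', '3g', ...) is Chromium-only, hence the loose cast.
    connection: (navigator as any).connection?.effectiveType,
  });
  // sendBeacon survives page unloads; fall back to fetch with keepalive.
  if (!navigator.sendBeacon('/rum-collect', body)) {
    fetch('/rum-collect', { method: 'POST', body, keepalive: true });
  }
}

onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
onTTFB(sendToAnalytics);
```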
Training sessions
Some audits are combined with pagespeed training sessions. The exact training sessions depend on the in-house knowledge and the roles of the attendees.
I often host a session before even sending over the audit, and some findings will then already be discussed. This way, a team already has in-depth and even use-case-specific knowledge. That makes it easier to interpret the audit and is likely to reduce developer/implementation time as well.
Finalizing
From there, after sharing the audit and maybe a training session recording, audits can be combined with follow-up audits, a few sessions or periodic sessions. Solutions such as RUM enable merchants and agencies to keep tracking the impact of deploys themselves. To achieve this with a friendly-priced solution, I teamed up and started RUM vision.