
Largest Contentful Paint & Diagnosing Googlebot’s Render Budget



It was just over a year ago that Dan Leibson open-sourced aggregate Lighthouse performance testing in Google Data Studio (yet another resource in the SEO industry inspired by Hamlet Batista). I'd like to share some observations from a year of running Lighthouse testing on our clients' sites, and how I think Google's render service operates.

 

I've noticed some interesting similarities between how successfully Google renders a page and the measurement thresholds in Core Web Vitals that define good scores. In this post I'll share a few methods to investigate Google's rendering and how I think that relates to LCP.

 

There are plenty of other resources by excellent SEOs if you need a general overview of Core Web Vitals. Today I’ll be talking almost entirely about LCP.

[Image: Darth Vader Chrome]

Google’s Core Web Vitals – As Reflections Of Googlebot’s Tolerances

 

Here are a couple of quotes from Google before I really dive into the realm of "SEO Theory". Both come from a Google Webmasters support thread where Cheney Tsai compiles a few FAQs regarding Core Web Vitals' minimum thresholds for acceptable performance; this part is specifically about PWAs/SPAs.

 

Q: If my site is a Progressive Web App, does it meet the recommended thresholds?

 

A: Not necessarily since it would still depend on how the Progressive Web App is implemented and how real users are experiencing the page. Core Web Vitals are complementary to shipping a good PWA; it’s important that every site, whether a PWA or not, focuses on loading experience, interactivity, and layout stability. We recommend that all PWAs follow Core Web Vitals guidelines.

 

Q: Can a site meet the recommended thresholds if it is a Single Page Application? 

 

A: Core Web Vitals measure the end-user experience of a particular web page and don’t take into account the technologies and architectures involved in delivering that experience. Layout shifts, input delays, and contentful paints are as relevant to a Single Page Application as they are to other architectures. Different architectures may result in different friction points to address and meet the thresholds. No matter what architecture you pick, what matters is the observed user experience.

 

https://support.google.com/webmasters/thread/86521401?hl=en 

(bolding is mine)

 

I think the PWA/SPA conversation is especially relevant to the concepts I'd like to discuss here, because it touches on "static HTML response vs. rendered DOM" and how js resources impact things at the highest level of complexity; but the concepts remain true at the lowest level of complexity too.

 

When Cheney says "different architectures may result in different friction points", that's a polite way of saying that with PWAs/SPAs especially, complex js-driven experiences are likely to cause performance problems for both users and Google. They delay LCP and FID, or potentially hide content from Googlebot entirely. But this kind of problem doesn't apply exclusively to PWAs/SPAs…

 

If the content isn't being painted quickly enough, Google can't see it, and neither will users who back out to the search results after losing patience with a spinning loading wheel. Google seems to have aligned its level of "render patience" with that of a typical user's – or less.

 

Googlebot has a render budget, and page speed performance optimizations for user experience (Core Web Vitals) are critical to how well Googlebot’s render budget is spent. 

 

This is a good place to define render budget, which I think of in two ways:

 

  1. The frequency with which Googlebot sends your URLs to its render service
  2. How much of your pages’ assets Googlebot actually renders (more on this later)

 

Core Web Vitals help us diagnose which page templates are failing certain technical measurements, either through the field data surfaced in Google Search Console from CrUX (Chrome users), or through manually created aggregate Lighthouse reporting.

[Image: LCP report in Google Search Console]

Good, bad, good.
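If you want to pull that same field data outside of the GSC interface, the CrUX API exposes it per URL. Here's a minimal sketch in TypeScript (Node), assuming Node 18+ for the global fetch, a CrUX API key in a CRUX_API_KEY environment variable, and a placeholder example.com URL; the endpoint and response shape are as I understand the CrUX API docs, so verify them before relying on this.

```typescript
// Minimal sketch: pull a URL's field LCP (p75, in milliseconds) from the CrUX API.
// Assumes Node 18+ (global fetch) and a CrUX API key in the CRUX_API_KEY env var.
// URLs without enough Chrome user traffic return a 404 (no field data).
const CRUX_ENDPOINT = 'https://chromeuxreport.googleapis.com/v1/records:queryRecord';

async function fieldLcpP75(url: string): Promise<number | null> {
  const res = await fetch(`${CRUX_ENDPOINT}?key=${process.env.CRUX_API_KEY}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ url, formFactor: 'PHONE' }),
  });
  if (!res.ok) return null;
  const data: any = await res.json();
  return data?.record?.metrics?.largest_contentful_paint?.percentiles?.p75 ?? null;
}

// Hypothetical template URLs -- swap in one representative URL per template.
for (const url of ['https://www.example.com/some-template-url/']) {
  const p75 = await fieldLcpP75(url);
  if (p75 === null) console.log(url, '-> no CrUX field data');
  else console.log(url, `-> p75 LCP ${p75}ms`, p75 > 2500 ? '(over the "good" threshold)' : '(good)');
}
```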

 

One approach I've taken over the past year is to connect the dots between those technical measurements and the resource-loading failures from Googlebot that are measurable with other tools and techniques.

 

What I’ve found is that these two things are often connected, and can help one diagnose the other. This method of diagnosis has also helped me uncover other render problems Googlebot is encountering that simple LCP testing in Chrome devtools/Lighthouse will not reveal.

 

Methods of Diagnosing Render Issues Related to LCP

 

There are three main types of render problems Googlebot has with content that I find are related to Largest Contentful Paint, and I inspect them in three different ways.

 

  1. Live inspection render preview shows missing assets – URL inspection in GSC
  2. Google cache render snapshot shows missing assets – Google cache: inspection
  3. Last crawl result shows Googlebot errors with assets – URL inspection in GSC (and server log analysis)

 

Of these, number three has been the most interesting to me and one I haven’t seen discussed elsewhere in the SEO blogosphere, so I’d be especially curious for your thoughts on that.

 

(For clarity, a page asset is any file requested along with the page: css, js, images, etc.)

 

  1. Live inspection render preview shows missing assets

Here’s a “Tested page” screenshot in Google Search Console, using the inspection tool which sends out Googlebot Smartphone. The page is rendered and a mobile preview is returned with missing images. 

[Image: GSC tested-page screenshot of a Gadsden page rendered with missing images]

Wow, Gadsden sure looks nice to visit this time of year.

 

If URLs in a certain page template respond this way 100% of the time with live URL inspection, you can rest assured that Googlebot Smartphone is never rendering the images from this template correctly, and this is the type of broken page it is seeing. This isn't a render budget problem; this is a "Google can't render your content even when they try" problem (assuming you've confirmed the js is actually being delivered to Googlebot).

 

In the example above, all of the site's images were delivered through js chunks that Googlebot was unable to render. I've encountered several sites like this, where LCP is flagged as high because users' mobile devices take a long time to load the js before rendering images*, and Googlebot never sees the images at all due to its inability to render the more complex framework.

 

*A small note here: LCP isn't strictly about "how long it takes to load an image". It measures how long it takes for the largest content element in the viewport to render, which could be a block of text or any other element; but it is most often an image.
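If you're not sure which element Chrome is actually scoring as the LCP on a given template, you can paste a small PerformanceObserver snippet into the devtools console. This is the standard web platform API rather than anything Googlebot-specific, and it's just a quick diagnostic sketch:

```typescript
// Paste into the devtools console on the page you're testing. Each LCP candidate is
// logged as it renders; the last one reported before user input is the element
// Chrome scores as the page's LCP.
const po = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // element/url/startTime live on the largest-contentful-paint entry type,
    // which isn't in the base PerformanceEntry typings, hence the cast.
    const lcp = entry as any;
    console.log(`LCP candidate at ${Math.round(lcp.startTime)}ms:`, lcp.element ?? lcp.url);
  }
});
po.observe({ type: 'largest-contentful-paint', buffered: true });
```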

 

  2. Google cache rendered snapshot shows missing assets

Here’s another type of scenario I’ve dealt with multiple times. Live URL inspection with the technique above sends out Googlebot Smartphone, and a clean rendered preview is returned. Images load, nothing looks broken. But when inspecting the last rendered cache snapshot from Google, I discover mixed results. Sometimes Google caches the page with no images, and sometimes it caches the page with images loading OK. 

 

Why is this? How can pages in the same template, running the same code, load differently for Google at different times? It's really quite simple: Googlebot sometimes computes the rendered DOM, and sometimes it doesn't.

 


 

Why doesn’t Google fully render every page on the internet all the time?

 

Well, in short,


“Money!” – Mr. Krabs 

Google can veil Core Web Vitals under the guise of "Think of the users!"… But they've been fighting an endless battle to crawl and render the JavaScript web, which keeps evolving and growing in complexity. And sure, no one wants a slow-loading site. But Google is also thinking of its pocketbook.

 

Perhaps @searchliaison would disagree with this take, but from the outside looking in, cost savings seem like, if not the primary driver of the CWV update, at least a convenient by-product of it.

 

Crawling the web is expensive. Rendering it is even more so, simply because of the time and energy it takes to download and compute the data. There are more bytes to process, and js adds an extra layer of complexity. 


It reminds me of when my mom would be dismayed to find I had used up the entire color ink cartridge printing 50 pages of video game guides, every page with full footer, banner, and sidebar images of the gaming website's logos.

[Image: archived GamesRadar news page, circa 2000, with logo-heavy sidebar and banner images]

Image via https://web.archive.org/web/20010805014708/http://www.gamesradar.com/news/game_news_1248.html 

Imagine those sidebars printed in color ink on every one of 50 pages 🙂 The year was 2000, and I was playing Star Wars: Starfighter…

 

But if I had copy-pasted those 50 pages into Microsoft Word first, deleted all of the color images, and printed in black and white, FAR LESS INK would have been used, and mom would have been less upset. The printer got the job done way faster too!

 

Google is just like mom (or the printer? I guess mom is the Google engineer in this analogy): "painting" (rendering) a web page and all its images/resources (js/css) is the same thing as printing in color ink. The ink cartridge represents Google's wallet.

 

Google wants YOU to do the work, much like I had to do the work of manually removing the images before printing. Google wants you to make their life easier so they can save money. By becoming the leading force in page speed performance, literally defining the acronyms and measurements in Core Web Vitals, Google sets the standard. If you don't meet that bar, then they simply won't render your site.

 

That's what this post is all about. If you don't meet their LCP threshold (or the other thresholds), a measurement bar they have set, their render service will time out and not all of your content will be considered for Search eligibility.

 

View-source HTML, the static HTML, is like the black-and-white ink. It's way smaller, quick to receive, quick to analyze, and thus CHEAPER for Google. Just because Google can sometimes crawl your rendered DOM doesn't mean they always will.

[Image: Disturbance In The Flow]

LCP is an acronym related to other acronyms, like CRP, DOM and TTI.

 

Google would much prefer it if you invested in creating a pre-rendered static HTML version of your site just for their bots, so that they don’t have to deal with the complexity of your js. The onus of investment is on the site owner. 
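A quick way to see how much of your important content exists only after js execution is to diff the static HTML response against a headless Chrome render. Here's a rough sketch assuming Node 18+ and the puppeteer npm package as a stand-in for Google's render service; the example.com URL is a placeholder, and the image-count and canonical checks are just illustrative heuristics:

```typescript
import puppeteer from 'puppeteer';

// Compare what's in the raw HTML response vs. the DOM after js runs.
async function compareStaticVsRendered(url: string) {
  // 1. Static HTML: roughly what Googlebot sees before (or instead of) rendering.
  const staticHtml = await (await fetch(url)).text();

  // 2. Rendered DOM: headless Chrome after js execution -- a stand-in, not Google's actual renderer.
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle0', timeout: 30_000 });
  const renderedHtml = await page.content();
  await browser.close();

  const imgCount = (html: string) => (html.match(/<img\b/gi) ?? []).length;
  const hasCanonical = (html: string) => /<link[^>]+rel=["']canonical["']/i.test(html);

  console.log(url);
  console.log('  <img> tags - static:', imgCount(staticHtml), '| rendered:', imgCount(renderedHtml));
  console.log('  canonical  - static:', hasCanonical(staticHtml), '| rendered:', hasCanonical(renderedHtml));
}

await compareStaticVsRendered('https://www.example.com/some-template-url/');
```

If your images or canonical only show up in the rendered column, you're betting on Google choosing to spend render budget on that URL.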

 

I’m obligated to say that Google cache isn’t a definitive analysis tool, but my point here is that if Google can cache your pages perfectly 100% of the time, you are likely delivering a simple HTML experience. 

 

When you see Google encountering inconsistent errors in caching, it likely means they have to rely on sending your content to their render service in order to see it correctly. Further analysis in GSC and elsewhere should be done to figure out wtf is going on and whether Google can or can't properly see your content, especially when these things are happening at scale. You don't want to leave this stuff to chance.
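Checking cache: snapshots by hand is fine for a handful of URLs. If you try to script it, be warned that Google rate-limits and captchas automated requests to the cache endpoint very quickly, so treat the sketch below as directional at best (the webcache URL pattern is the standard cache: lookup; the example URL is a placeholder):

```typescript
// Rough sketch only: request the cache: snapshot for a URL and record the HTTP
// status plus a crude <img> count in the returned HTML. Google's own cache banner
// adds some markup of its own, so the numbers are directional, not exact.
const CACHE_PREFIX = 'https://webcache.googleusercontent.com/search?q=cache:';

async function cacheSnapshot(url: string) {
  const res = await fetch(CACHE_PREFIX + encodeURIComponent(url), {
    headers: { 'User-Agent': 'Mozilla/5.0' }, // the default fetch UA tends to be rejected outright
  });
  const html = res.ok ? await res.text() : '';
  const imgTags = (html.match(/<img\b/gi) ?? []).length;
  console.log(url, `-> status ${res.status}, <img> tags in snapshot: ${imgTags}`);
}

await cacheSnapshot('https://www.example.com/some-template-url/');
```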

 

  3. Last crawl result shows Googlebot errors with assets

 

This is where shit gets really interesting. When I encounter the scenario presented above (sometimes Google caches resources for a certain page template correctly, sometimes it does not, yet Googlebot Smartphone ALWAYS renders the content correctly in live URL inspections), I have found a pattern of crawl errors left behind in Google's last crawl result.

[Image: the X-Google-Crawl-Date header in a GSC HTTP response]

Image taken from https://ohgm.co.uk/x-google-crawl-date/ 

 

This is a tab of Google Search Console I learned about from, in my opinion, the smartest technical SEO mind in the industry – Oliver H.G. Mason of ohgm.co.uk. It's the "More Info" tab of URL inspections in GSC, where you can click "HTTP Response" and see a provisional header left by Google called "X-Google-Crawl-Date". As you may have deduced, this is the date and time Googlebot last crawled the page.

 

It was after reading this blog post and discovering this header that I began to pay more attention to the “More Info” tab when inspecting URLs. There are two other options in this tab: “Page Resources”, and “JavaScript console messages”. 

 

What I have found in the "Page Resources" tab, over and over again, is that Googlebot in the wild has a much lower tolerance for asset-heavy page templates than the Googlebot Smartphone that GSC live URL inspections send out.

[Image: GSC "Page resources" report listing "Other error" entries]

56 of 160 page resources weren’t loaded by Googlebot the last time they crawled this theater page – many of which were movie poster art .jpgs. But when I perform a live test with this same URL in GSC, there are only 5 to 10 page resource errors on average, mostly scripts.

 

These errors are vaguely reported as “Other error” with an XHR designation (other common possibilities are Script and Image). So WTF is an “Other error”? And why does the quantity of these errors differ so vastly between Google’s last crawl result in the wild, vs a live URL inspection in GSC?

 

The simple theory, I believe, is that Googlebot has a very conservative render timeout when crawling sites in order to save time and resources – which saves them money. That render timeout seems to align with the scores flagged as yellow and red for LCP. If the page takes too long to load for a user, that's about the same amount of time (or less) that Googlebot is willing to wait before giving up on page assets.

 

And that seems to be exactly what Googlebot does. As you can see from that screenshot above, Google chose to not render about ⅓ of the page’s resources, including those important for SEO: images of movie posters for the ticket showtimes! I’ve found that quite frequently, the images marked as errors here do not appear correctly in Google’s last rendered cache: snapshot of the same URLs.

[Image: rendered cache snapshot where video thumbnails appear as blank colored squares]

These tiles are supposed to be thumbnail images for videos. Instead they are a sort of modern art block set of colored squares.

[Image: page source showing a <body> made up almost entirely of scripts]

The <body> is almost entirely scripts; Google rendered some of the page content, but not all of it. At least we got some colored square tiles.
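Nobody outside Google knows what the real render timeout is, but you can approximate the effect of a conservative render budget yourself: give headless Chrome a hard cutoff and see which requests are still unfinished when it expires. Here's a sketch assuming the puppeteer npm package; the 5-second budget is a purely hypothetical number chosen to echo the LCP "needs improvement" range, not a documented Googlebot value:

```typescript
import puppeteer from 'puppeteer';

// Purely hypothetical budget -- NOT a documented Googlebot timeout.
const RENDER_BUDGET_MS = 5_000;

async function budgetedRender(url: string) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  const requested = new Set<string>();
  const finished = new Set<string>();
  page.on('request', (req) => requested.add(req.url()));
  page.on('requestfinished', (req) => finished.add(req.url()));

  try {
    await page.goto(url, { waitUntil: 'networkidle0', timeout: RENDER_BUDGET_MS });
  } catch {
    // Budget exhausted before the network went idle -- the interesting case.
  }

  const unfinished = [...requested].filter((u) => !finished.has(u));
  console.log(`${url}: ${finished.size} assets finished, ${unfinished.length} failed or still pending at the cutoff`);
  unfinished.slice(0, 10).forEach((u) => console.log('  not loaded in time:', u));

  await browser.close();
}

await budgetedRender('https://www.example.com/some-template-url/');
```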

 

This is not something you should leave to chance. Like all things Googlebot, it's up to you to find these issues, diagnose them manually, and then find ways to nudge Google's behavior toward an outcome that makes its job of rendering your content easier.

 

Otherwise, you are gambling with your site’s render and crawl budgets and hoping the automated systems figure out something close to optimal. I’d rather not. How to accomplish that is a post for another day.

 

There are problems with this GSC page resource error method

 

There is noise in GSC reporting; it can't easily be done at scale; it can be unreliable or unavailable at times; and it isn't true for all sites that the generic XHR "other errors" marked in these last crawl reports align with the LCP issues I'm trying to diagnose. But it can still be useful for my diagnosis and testing purposes.

 

A Google representative might say, "These errors are an inaccurate representation of what's happening in our algorithm; it's much more complex than that," and that's all fine and well. Their point may be that when the "real" render agent (i.e., an unrestricted, non-render-budgeted agent) is sent out, as happens with a live URL inspection, there are no page errors. And that "sometimes" Googlebot in the wild will open up its render budget and do the same thing.

 

But I care about what Google is doing at scale when assessing huge quantities of pages, and when Google isn’t rendering every time, or giving up on the render because the page takes too long to load, that can become a huge SEO problem. 

 

It’s the same kind of thing when a canonical attribute is only visible in the rendered DOM but not the static HTML. It really doesn’t matter if Google can see the canonical properly when relying on the rendered DOM if they don’t do that 100% of the time for 100% of your pages. You’re going to end up with canonicalization inconsistencies.

 

But how are you going to do this at scale when Google limits us to inspecting only 50 URLs per day? That limit is the number one thing I wish Google would remove or raise, aside from wanting better information on where URLs are being canonicalized when Google ignores our canonicals, as one small example… We could rant about that for a while…


Is there any hope?

 

A little. If you have access to server logs, I recommend comparing errors across Googlebot's various user agents: count how many times each of your page assets responds with anything other than 200 OK, per user agent type. This will sometimes get you something similar to the last-crawl page resources error reporting available in GSC.
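Here's a minimal sketch of that log analysis, assuming an access log in combined log format and Node's built-in readline. The regex, file name, and user agent buckets are assumptions to adapt to your own setup, and for rigor you'd also verify Googlebot IPs via reverse DNS rather than trusting the user agent string:

```typescript
import { createReadStream } from 'node:fs';
import { createInterface } from 'node:readline';

// Count non-200 responses for page assets, bucketed by Googlebot user agent and status.
// Combined log format assumed: ... "GET /path HTTP/1.1" 200 1234 "referer" "user-agent"
const counts = new Map<string, number>();
const ASSET_RE = /\.(js|css|jpe?g|png|gif|webp|svg|woff2?)(\?|$)/i;
const LINE_RE = /"(?:GET|HEAD) (\S+)[^"]*" (\d{3}) \S+ "[^"]*" "([^"]*)"/;

const rl = createInterface({ input: createReadStream('access.log') });

rl.on('line', (line) => {
  if (!line.includes('Googlebot')) return; // NB: spoofable -- verify the IPs via reverse DNS for rigor
  const match = LINE_RE.exec(line);
  if (!match) return;
  const [, path, status, ua] = match;
  if (!ASSET_RE.test(path) || status === '200') return;
  const agent = /Android.+Mobile/.test(ua) ? 'Googlebot Smartphone' : 'Googlebot Desktop/other';
  const key = `${agent} -> ${status}`;
  counts.set(key, (counts.get(key) ?? 0) + 1);
});

rl.on('close', () => {
  for (const [key, n] of [...counts].sort((a, b) => b[1] - a[1])) console.log(key, n);
});
```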

 

Another quick task I do is to sort all verified Googlebot crawl events by their number of occurrences and segment URLs that are canonicalized to versus canonicalized away. You can generally tell fairly easily when mass quantities of URLs are having their canonicals ignored by Google.
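And a rough sketch of that second task, assuming you've already filtered the log down to verified Googlebot hits (a hypothetical googlebot-hits.txt, one URL per line) and exported a urls.csv mapping each URL to the canonical it declares from your crawler of choice:

```typescript
import { readFileSync } from 'node:fs';

// Hypothetical inputs: a list of verified-Googlebot request URLs (one per line)
// and a crawl export mapping each URL to the canonical it declares (url,canonical).
const hits = readFileSync('googlebot-hits.txt', 'utf8').trim().split('\n');
const canonicalOf = new Map<string, string>(
  readFileSync('urls.csv', 'utf8')
    .trim()
    .split('\n')
    .slice(1) // skip the header row: url,canonical
    .map((row) => row.split(',') as [string, string]),
);

// Count crawl events on URLs that canonicalize elsewhere. Heavy crawling of
// canonicalized-away URLs suggests Google may be ignoring those canonicals.
const crawlCounts = new Map<string, number>();
for (const url of hits) crawlCounts.set(url, (crawlCounts.get(url) ?? 0) + 1);

for (const [url, count] of [...crawlCounts].sort((a, b) => b[1] - a[1])) {
  const canonical = canonicalOf.get(url);
  if (canonical && canonical !== url) {
    console.log(`${count}x crawls of ${url} (canonicalizes to ${canonical})`);
  }
}
```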

 

Why do any of this? 

 

While it's true that Lighthouse reporting and Chrome devtools may help you identify some of the assets causing LCP issues for users, these other techniques will help you connect the dots to how well Googlebot is technically accessing your content. Lighthouse reporting is not perfect and has failed me where other methods succeeded. Sometimes only Googlebot experiences server response issues while your Node/Chrome Lighthouse testing does not. Sometimes websites are too complex for Lighthouse to analyze correctly.

 

Sometimes the water is muddier than it may seem from automated reporting tools, with mixed behavior for various Googlebots evident.

 

What about FID? CLS?

 

This post was mostly concerned with LCP, as I mainly wanted to discuss how Google's render service times out on resources and how that seems to relate to LCP scoring. LCP is also the metric I most often find sites struggling with the worst, and it's usually more obvious to fix than First Input Delay.

 

LCP also seems to me the most sensible place to start, as many of the same js issues that lengthen LCP also contribute to long input delays. There are other areas of FID to think about, like the critical rendering path, code coverage waste, paring down assets by page template, and so much more… But that's an entire post in and of itself.

 

CLS is just so obviously bad for everything and easy to detect that it isn’t really worth discussing here in detail. If you have any CLS above 0.0, it is a high priority to resolve. Here’s a good resource.

 

Conclusions

 

I believe Google spends its render budget as conservatively as possible, especially on large sprawling sites, opting a majority of the time to rely on static HTML when possible. I’m sure there are certain things that trigger Google to adjust render budget appropriately, like its own comparisons of static HTML vs rendered DOM, and your domain’s authority and demand in search results from users. 

 

Perhaps all of the other pieces of the SEO pie, like link authority and content quality earn you a higher level of render budget as well through the automated systems because your site is perceived as “high quality” enough to render frequently.

 

“Is your site big and popular enough that we should spend the money rendering it, because our users would feel our search results were lower quality if we didn’t include you?” I’d be willing to bet that Google engineers manually turn the render budget dials up for some sites, depending on their popularity.

 

If this is not you, then you might consider optimizing for a theoretical render budget – or at the very least, optimize for tangible Core Web Vitals scores.

 

To start, I recommend checking out your LCP scores and diagnosing where Google (and Chrome) might be choking on some of your more complex resources. These are a few places to begin the analysis process.

 

  1. Create Lighthouse reporting in aggregate for all of your site's most important templates (see the sketch after this list)
  2. Investigate GSC render with live URL test for URLs that have LCP issues
  3. Investigate Google cache snapshots for all important URLs with LCP issues
  4. Investigate GSC last crawl result “page resources” error reporting
  5. Compare static HTML vs rendered DOM, assess for possible areas of simplification that affect important page content
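For step 1, here's a minimal sketch of looping Lighthouse over one representative URL per template using the lighthouse and chrome-launcher npm packages. The open-sourced Data Studio setup mentioned at the top of this post is a far more complete approach; this just illustrates the idea, and the audit key and result shape are as I understand the Lighthouse Node API, so verify against the version you install:

```typescript
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

// One representative URL per important page template -- hypothetical examples.
const templateUrls: Record<string, string> = {
  home: 'https://www.example.com/',
  category: 'https://www.example.com/some-category/',
  detail: 'https://www.example.com/some-category/some-item/',
};

const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });

for (const [template, url] of Object.entries(templateUrls)) {
  const result = await lighthouse(url, {
    port: chrome.port,
    output: 'json',
    onlyCategories: ['performance'],
  });
  const score = result ? Math.round((result.lhr.categories.performance.score ?? 0) * 100) : 0;
  const lcp = result?.lhr.audits['largest-contentful-paint']?.displayValue ?? 'n/a';
  console.log(`${template}: performance ${score} | LCP ${lcp}`);
}

await chrome.kill();
```

From there, pipe the numbers into whatever reporting you already use and watch for template-level regressions over time.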

 


