Improve Google PageSpeed Insights on Jamstack Websites

Improving Google PageSpeed Insights, Loading Speed & SEO – Why Even Bother?

Google PageSpeed Insights is a free tool that you can use for measuring the load time of a URL. The tool calculates the Core Web Vitals data and combines it with Lighthouse data, giving you a comprehensive estimate of your website’s performance.

Website Performance, Bounce Rate & Other Marketing Metrics

The main reason to improve your Google PageSpeed Insights score is to improve your web page’s user experience. If you want a website to rank higher in Google’s search results, you’re going to have to focus on your page speed & performance before thinking too much about the rest of your SEO.

In this article, we’re going to take a closer look at what you, as a Next.js developer, can do to improve your Google PageSpeed Insights score with nine tweaks.


1. Reduce the Amount of JavaScript Executed on Load

One of the most common issues with Jamstack apps reported by Lighthouse is high JavaScript execution time. To understand how to fix it, we first have to understand how Next.js works.

When the page is built on the server, Next.js runs our code to generate a static HTML page, which is then served to the client along with the JavaScript bundle (asynchronously, of course). Thanks to this, the user sees the content of the page almost immediately. The page is not interactive just yet, though – the process of making it interactive is called hydration.

In simple terms, hydration connects the code with the server-generated HTML – it re-renders the application much like a regular React app would.

With that knowledge in mind, we can conclude that the code run during the server-side render is then run again on the client side – even though not all of it has to be. So, what’s the solution? We should do as much computation server-side ONLY as we can.

A good example would be parsing the CMS API responses to React component props. Let’s say that a section consists of tiles, each tile has an image, a title, and a link. It may be tempting to pass the whole API response representation of that section as props of the component and parse it in a hook. But then that hook would be run both on the server and the client – even though the output would be the same. Instead, we should move the parsing logic into the getStaticProps and pass the result as props to the component.

This is the way it’s usually handled:

export const useTiles = (data) => {
 const { tiles } = data;
 
 return tiles.map(({ id, title, image, link }) => {
   // Extracts and computes the necessary fields from the image object
   const imageData = getImageData(image);
   // Extracts and computes the necessary fields from the link object
   const linkData = getLinkData(link);
 
   return {
     id,
     title: title ?? "",
     image: imageData,
     link: linkData,
   };
 });
};

A hook transforms the API response

export const TilesSection = (props) => {
 const { data } = props;
 
 const tiles = useTiles(data);
 
 return (
   <section>
     {tiles.map((tile) => (
       <Tile key={tile.id} {...tile} />
     ))}
   </section>
 );
};

The component uses the transformed data to render the UI

And here’s an example of a better implementation:

export const getTilesSectionProps = (data) => {
 const { tiles } = data;
 
 return {
   tiles: tiles.map(({ id, title, image, link }) => {
     // Extracts and computes the necessary fields from the image object
     const imageData = getImageData(image);
     // Extracts and computes the necessary fields from the link object
     const linkData = getLinkData(link);
 
     return {
       id,
       title: title ?? "",
       image: imageData,
       link: linkData,
     };
   }),
 };
};

Extract the transformation logic to a function

export const getStaticProps = async () => {
 /* Code for fetching the page */
 
 const pageData = /* extracted page data */
 
 return {
   props: {
     ...pageData,
     sections: pageData.sections.map((section) => {
       switch (section.type) {
         case "tilesSection":
           return getTilesSectionProps(section);
 
       /*
           cases for all other sections
       */
 
         default:
           return section;
       }
     }),
   },
 };
};

Replace the API response of the section with the props of the component using the previously created function

2. Lazy Load Images

Images are key when you’re trying to improve your Google PageSpeed Insights score. They are some of the most notorious page speed killers, which is why it’s important to handle them properly.

The most common mistake is loading all the images at once when they should instead be lazy-loaded. Lazy loading means that an image is only downloaded when it is (almost) inside the viewport – so, for example, an image at the bottom of the page won’t be loaded until the user scrolls to it, saving precious milliseconds. The time it takes for a page to load should be as short as possible; it’s key to optimizing your scores and minimizing the bounce rate.

Next.js comes with a built-in component called Image, which implements lazy loading (and more) by default.

Example usage of the image component:

<Image
  src="/author.png"
  alt="Picture of the author"
  width={500}
  height={500}
/>

3. Use the Latest Image Formats and the Right Image Sizes

The size of an image differs based on its format. For example, a PNG image weighs more than a JPG, but its quality is much better. In recent years, new image formats like WebP and AVIF, designed especially for the web, have been created. They combine great quality with low file size.

Unfortunately, since images on Jamstack websites are usually provided by the user in the CMS, it’s not possible to have all of them in one of these formats. The solution is to use image optimization cloud providers like Imgix (or use the built-in one provided by Next.js). Some CMSs, like Sanity, come with image optimization built in. When using the Next.js Image component, we can define a loader and append query params that will make the provider convert the image to one of the aforementioned formats. This alone will greatly improve your Google PageSpeed Insights score.


Optimize the Size of Your Images and Improve Load Times

Another thing is to size the images properly. Say we have a huge image, 4036x1024px for example. Unless the user has a 4K display, the extra pixels won’t make a difference – well, apart from making the page load much longer. Not exactly what we want. This effect is even more pronounced on mobile devices because of smaller screens and lower CPU and network performance. This is where srcsets come in. Basically, an srcset is a way to tell the browser which image should be loaded for a given screen width. Combining it with the aforementioned image optimization cloud providers yields great results. Luckily, when using Next.js we don’t have to do it manually – the Image component will do it for us. When given a loader function, Next.js will generate an srcset using the default values of the config file’s imageSizes property (we can also provide our own sizes).

An example of a loader used with Sanity CDN:

const loader: ImageLoader = ({ src, width, quality }) =>
 `${src}?w=${width}&q=${quality || 75}&auto=format&fit=max`;

An example of usage of the loader:

<Image
 loader={loader}
 /*
 * the rest of the props
 */
/>
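To make the srcset mechanism more concrete, here is a simplified sketch of what the Image component does with the loader under the hood. The width list below is an illustrative stand-in for the config file’s size values; the real implementation lives inside next/image:

```javascript
// The same kind of loader as above, in plain JavaScript
const loader = ({ src, width, quality }) =>
  `${src}?w=${width}&q=${quality || 75}&auto=format&fit=max`;

// Sketch of how an srcset is assembled: one loader URL per configured width,
// each paired with a "<width>w" descriptor so the browser can pick the best fit
const buildSrcSet = (src, widths) =>
  widths.map((width) => `${loader({ src, width })} ${width}w`).join(", ");

console.log(
  buildSrcSet("https://cdn.sanity.io/images/photo.jpg", [640, 1080, 1920])
);
```

The browser then downloads only the candidate closest to the actual rendered size, so a phone never pays for desktop-sized pixels.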

4. Lazy Load Scripts

Jamstack websites often include 3rd party scripts like Google Tag Manager, cookie consent managers, newsletter pop-ups, and other page speed killers. You really need to get rid of some of them if you’re trying to improve your Google PageSpeed Insights score. Running so many scripts can cause even the most optimized website to score poorly in a Lighthouse test. Some third-party scripts have a great impact on loading performance and can degrade the user experience drastically, especially if they are render-blocking or delay page content from loading.

One thing we can notice is that not all of those scripts have to be loaded immediately. For example, showing the cookie consent banner or newsletter pop-up 5–10 seconds after the user visits the site won’t hurt the user experience. Quite the opposite – it will greatly increase the Lighthouse score (and thus improve the user experience).

To achieve this we can utilize Next’s built-in Script component. Currently, it offers four different loading strategies:

beforeInteractive – load before the page is interactive

Using this strategy causes the script to be injected into the initial HTML from the server and run before self-bundled JavaScript is executed. It should be used for any critical scripts that have to be fetched and executed before the page is interactive. Some examples of scripts that should be loaded using this strategy are bot detectors and 3rd party libraries that have to be loaded before the JavaScript code is executed. It is important to note that this strategy can be applied ONLY to scripts that are inside the Next.js custom document component.

afterInteractive – load immediately after the page becomes interactive

Scripts using the afterInteractive strategy are injected client-side and will run after hydration. This is a great strategy for scripts that should be run as soon as possible, but don’t have to do so before executing the JavaScript bundle. A perfect solution for tag managers and analytics.

lazyOnload – load during idle time

The lazyOnload strategy causes the scripts to be loaded after all resources have been fetched and during idle time. It’s a perfect match for low-priority scripts that don’t need to be run immediately, such as chatbots, cookie consent managers, or newsletter pop-ups.

worker – load in a web worker

This is the newest strategy: scripts utilizing the worker strategy are executed in a web worker using Partytown. This allows us to offload the work from the main thread to a background one. One of the most common issues reported by Lighthouse is high main-thread load, and Partytown is a promising solution for that.

However, it is still an experimental feature, since the library itself is still in beta. That doesn’t mean we shouldn’t use it, but we have to be cautious. I have successfully used it with Google Tag Manager and some cookie consent managers.
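Putting the strategies together, a typical setup might look like the sketch below. The script URLs and the GTM ID are placeholders, not real ones:

```jsx
import Script from "next/script";

export const Layout = ({ children }) => (
  <>
    {children}
    {/* Analytics: injected client-side, runs right after hydration */}
    <Script
      src="https://www.googletagmanager.com/gtm.js?id=GTM-XXXXXXX"
      strategy="afterInteractive"
    />
    {/* Low-priority newsletter pop-up: fetched during idle time */}
    <Script src="https://example.com/newsletter-popup.js" strategy="lazyOnload" />
  </>
);
```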


5. Utilize Code Splitting

Let’s say our app contains fifty different sections. Usually, there will be a single component called Sections that takes an array of sections and, based on their type, renders them using the corresponding components. The problem with such a solution is that we always load all of our sections into the bundle, even though a given page may only use a couple of them.

The old, inefficient way:

import Section1 from "@sections/section1";
import Section2 from "@sections/section2";
/*
* ...
*/
import Section49 from "@sections/section49";
import Section50 from "@sections/section50";
 
 
export const Sections = ({ sections }) => {
  return sections.map((section) => {
    switch (section.type) {
      case "section1":
        return <Section1 {...section} />;

      case "section2":
        return <Section2 {...section} />;

      /*
       * ...
       */

      case "section49":
        return <Section49 {...section} />;

      case "section50":
        return <Section50 {...section} />;

      default:
        return null;
    }
  });
};

Luckily, Next.js Provides Us With a Useful Tool Called dynamic

It takes a function that returns a dynamic import of a component and returns a component that wraps the imported one. 

We can use the returned component just as we would use the imported one. The difference is that now the code of that component will be put in a separate bundle and downloaded only when the component is rendered. What that means is that if a page consists of four sections, only the code of those four components will be included in the bundle, greatly reducing its size and improving the page speed rating.

To improve Google PageSpeed Insights score, try the new, optimized way:

import dynamic from "next/dynamic";
 
const Section1 = dynamic(() => import("@sections/section1"));
const Section2 = dynamic(() => import("@sections/section2"));
/*
* ...
*/
const Section49 = dynamic(() => import("@sections/section49"));
const Section50 = dynamic(() => import("@sections/section50"));
 
export const Sections = ({ sections }) => {
 return sections.map((section) => {
   switch (section.type) {
     case "section1":
       return <Section1 {...section} />;
 
     case "section2":
       return <Section2 {...section} />;
 
     /*
      * ...
      */
 
     case "section49":
       return <Section49 {...section} />;
 
     case "section50":
       return <Section50 {...section} />;
 
     default:
       return null;
   }
 });
};

6. Optimize SVGs (Img Src vs SVG Markup)

SVG images are often used for the site’s logo in the header or as icons in different sections of a typical Jamstack website. Unfortunately, when a user uploads an SVG image to a CMS, the image is returned as a link to a CDN instead of the markup. Let me give you a couple of examples of why this might be an issue:

Let’s say you have a section with a list of features; each feature has an icon and some text associated with it. Having 10 features means 10 extra requests for those icons. That’s not what we want. Not only will it lower our Lighthouse score, but it is also going to result in a poor user experience – the user doesn’t want to see placeholder images instead of icons for the first second of the page load.

Be Careful With Large Images – They Slow Down Your Site’s Load Time

Another issue is that if you’re using Next’s Image component (which you should be), while the image is loading, a placeholder is displayed in its place. This is a great feature for big images, like the background of a hero section, but not so good for a website’s logo in the header, which is usually the first thing the user sees.

Luckily, there’s a simple solution for both of these problems. We can fetch the markup from the CDN URL during the build (for example in getStaticProps) and pass it as a prop of our image component. We have to be careful about that, though, because inlining SVGs increases the size of the HTML, which impacts Lighthouse scores. It’s fine to inline 10–20 small images like the aforementioned icons, but we don’t want a big, complex SVG that weighs a couple of hundred kilobytes to bloat the HTML.

A util function that fetches the SVG markup from a URL:

export const fetchSvgMarkup = async (url) => {
 const result = await fetch(url);
 
 return result.text();
};
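Here’s a sketch of how the fetched markup could then flow into the page. Note that fetchPageData and the iconUrl field are illustrative stand-ins for your own CMS query and data model:

```jsx
export const getStaticProps = async () => {
  // fetchPageData is a stand-in for your CMS query
  const pageData = await fetchPageData();

  // Resolve the CDN URL to the actual SVG markup at build time
  const iconMarkup = await fetchSvgMarkup(pageData.iconUrl);

  return {
    props: { iconMarkup },
  };
};

// Inline the markup instead of issuing an extra request for the file
export const FeatureIcon = ({ markup }) => (
  <span dangerouslySetInnerHTML={{ __html: markup }} />
);
```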

7. Utilize Next Image Component Priority Prop

In recent years, browser support for preloading images has grown and is now present in all of the most popular browsers. The Image component does it all for us – we simply set a single prop: priority.

Preload an Image First and Optimize the Web Page Loading

Preloading an image greatly improves the LCP (Largest Contentful Paint) score, which measures the time it takes to paint the largest element in the initial viewport. This is perfect for the background image of a hero section (which is present on almost every page of a Jamstack website) or for full-screen images.

Proceed With Caution

This feature has to be used with caution: setting the priority prop on too many images on a single page may actually bring the opposite result and slow the page down!

It’s hard to determine at build time whether a section will be visible in the initial viewport, so it might be a better idea to add a checkbox to the image model in the CMS, along with guidance for editors on when to check it.

“We ran tests on a site that uses JavaScript to lazy-load responsive images. Preloading resulted in images loading 1.2 seconds faster.”

https://web.dev/preload-responsive-images/

An example of priority prop usage:

<Image
 src="big_background.png"
 alt="The background image of a hero section"
 width={1920}
 height={1080}
 priority
/>

8. Use Resource Directives

Resource directives tell the browser which resources it should load first. Using them correctly speeds up the loading time of a page. The three most important directives are:

dns-prefetch

dns-prefetch tells the browser to resolve a domain’s DNS ahead of time.

Use cases for dns-prefetch are the same as for preconnect, but preconnect is a slightly more expensive operation and should be reserved for the most important origins. dns-prefetch should be used for all the others, or as a fallback, since its browser support is better than preconnect’s.

preconnect

This does the same as dns-prefetch, but also performs the TCP handshake and TLS negotiation. It should be used for external origins that you know the browser will need quickly.

A good example of using this directive is pre-connecting to 3rd party font providers like Google Fonts. Another use case is pre-connecting to a 3rd party provider hosting a stylesheet needed by some library – for example, some carousel libraries host their required CSS on a CDN.

Pre-connecting also makes sense for the CDN that hosts the images used on our website. For example, when using Sanity, all of the images are stored under https://cdn.sanity.io/, so pre-connecting gives us a great boost in performance.

preload

preload tells the browser to load the specified file into the cache. It should be used for files that are required to display the page, such as a font that’s specified in a stylesheet.

Besides caching the resource, the preload directive is also a hint to the browser to load the specified file as soon as possible. Some use cases of preload would be:

Fonts defined in stylesheets

The font faces declared in CSS only start loading after the CSS has been loaded and parsed; preloading causes the browser to fetch those fonts immediately.

Images defined in stylesheets

Images referenced from a stylesheet won’t start loading until the CSS file is downloaded and parsed. Preloading those images will mitigate that issue. 

Example of using the preconnect directive for Google Fonts:

<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin />
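The other two directives follow the same pattern. Note that preload requires the as attribute so the browser knows the resource type; the origin and font path below are illustrative:

```html
<!-- Resolve DNS ahead of time for a less critical third-party origin -->
<link rel="dns-prefetch" href="https://cdn.sanity.io" />

<!-- Fetch a render-critical font immediately -->
<link
  rel="preload"
  href="/fonts/my-font.woff2"
  as="font"
  type="font/woff2"
  crossorigin
/>
```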

9. Load Libraries Dynamically

Third-party libraries are great, but they can weigh quite a bit, which costs the website precious Lighthouse points. By default, the source code of those libraries is placed in the JS bundle fetched when the user visits the page.

Say, for example, we’re using Formik to handle the contact form at the bottom of a page, or Algolia for search. Those features are rarely (if ever) used immediately after visiting the page. What we can do is use dynamic imports to load them later, in a separate bundle.

For Algolia, we can import its source code when the user focuses the search input; by the time they have typed the query, the library will be ready.
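A sketch of that focus-triggered import for the search case – the algoliasearch package and the keys here are illustrative, so swap in whatever search client you actually use:

```jsx
import { useRef } from "react";

export const SearchInput = () => {
  const clientRef = useRef(null);

  const handleFocus = async () => {
    // Start downloading the library the moment the input is focused;
    // by the time the user has typed the query, it's usually ready
    if (!clientRef.current) {
      const { default: algoliasearch } = await import("algoliasearch");
      clientRef.current = algoliasearch("APP_ID", "SEARCH_API_KEY");
    }
  };

  return <input type="search" onFocus={handleFocus} />;
};
```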

As for Formik, we can start fetching the library on the first user interaction with the page, like scroll or touch event.

Example of dynamically loading a package using the scroll event:

useEffect(() => {
  const handleScroll = async () => {
    const lib = await import("some-library");

    // Use the library, or assign it to a ref
  };

  document.addEventListener("scroll", handleScroll, {
    once: true,
  });

  return () => {
    document.removeEventListener("scroll", handleScroll);
  };
}, []);

Summary – Getting the Search Engine to Work for You With Next.js

As you can see, you don’t need to go looking for a new hosting company, migrate your blog from WordPress, or do a complete SEO audit of your site to improve your Google PageSpeed Insights score and website performance in general. As a developer, there are a lot of clever solutions you can apply to improve your website’s page speed, page load, and all the other Google metrics.

Are you a developer looking for smart solutions & ideas? Consider signing up for our newsletter for weekly updates!
