Improve Google PageSpeed Insights on Next.js Websites

Improving Google PageSpeed Insights, Loading Speed & SEO – Why Even Bother?

Google PageSpeed Insights is a free tool that you can use to measure the load time of web pages. The tool calculates Core Web Vitals data and combines it with Lighthouse data, giving you a comprehensive estimate of your Next.js website's performance. A score above 90 points is considered good, and that's what everyone is aiming for – and so should you. In this article, we're going to take a closer look at what you, as a Next.js developer, can do to improve your Google PageSpeed Insights score with ten tweaks.

Website Performance, Bounce Rate & Other Marketing Metrics

The main reason to improve your Google PageSpeed Insights score is to improve your web page's user experience. It is also one of the most important ranking factors. If you want a website to rank higher in Google's search results, you're going to have to focus on your page speed and your site's performance – and do so before thinking too much about SEO-optimizing the content.

We encourage developers to think broadly about how performance affects a user’s experience of their page and to consider a variety of user experience metrics. Although there is no tool that directly indicates whether a page is affected by this new ranking factor, here are some resources that can be used to evaluate a page’s performance.


Zhiheng Wang and Doantam Phan about “The Speed Google Update”

1. Reduce the Amount of JavaScript Executed on Load

One of the most common issues with Next.js apps reported by Lighthouse is high JavaScript execution time. To understand how to fix it, we first have to understand how Next.js works.

When a new page is built on the server, Next.js runs our code to generate a static HTML page, which is then served to the client along with the JavaScript bundle (asynchronously, of course).

Thanks to this, the user sees the content of the page almost immediately. The page is not interactive just yet, though – the process of making it interactive is called hydration.

In simple terms, hydration connects the code with the server-generated HTML – it re-renders the web application much like a regular React app would.

With that knowledge in mind, we can conclude that the code run during the server-side render is then run again on the client side – even though not all of it has to be. So, what's the solution? We should do as much computation exclusively server-side as we can.

A good example would be parsing the CMS API responses to React component props. Let’s say that a section consists of tiles, each tile has an image, a title, and a link. It may be tempting to pass the whole API response representation of that section as props of the component and parse it in a hook.

But then that hook would be run both on the server and the client – even though the output would be the same. Instead, we should move the parsing logic into the getStaticProps and pass the result as props to the component.

This is the way it’s usually handled:

export const useTiles = (data) => {
 const { tiles } = data;
 
 return tiles.map(({ id, title, image, link }) => {
   // Extracts and computes the necessary fields from the image object
   const imageData = getImageData(image);
   // Extracts and computes the necessary fields from the link object
   const linkData = getLinkData(link);
 
   return {
     id,
     title: title ?? "",
     image: imageData,
     link: linkData,
   };
 });
};

The hook above transforms the API response; the component below then consumes it:

export const TilesSection = (props) => {
 const { data } = props;
 
 const tiles = useTiles(data);
 
 return (
   <section>
     {tiles.map((tile) => (
       <Tile key={tile.id} {...tile} />
     ))}
   </section>
 );
};

The component uses the transformed data to render the UI.

And here’s an example of a better implementation:

export const getTilesSectionProps = (data) => {
 const { tiles } = data;
 
 return {
   tiles: tiles.map(({ id, title, image, link }) => {
     // Extracts and computes the necessary fields from the image object
     const imageData = getImageData(image);
     // Extracts and computes the necessary fields from the link object
     const linkData = getLinkData(link);
 
     return {
       id,
       title: title ?? "",
       image: imageData,
       link: linkData,
     };
   }),
 };
};

The transformation logic is extracted into a plain function, which we then call from getStaticProps:

export const getStaticProps = async () => {
 /* Code for fetching the page */
 
 const pageData = /* extracted page data */
 
 return {
   props: {
     ...pageData,
     sections: pageData.sections.map((section) => {
       switch (section.type) {
         case "tilesSection":
           return getTilesSectionProps(section);
 
       /*
           cases for all other sections
       */
 
         default:
           return section;
       }
     }),
   },
 };
};

Using the previously created function, the API response for each section is replaced with the component's props at build time – so the parsing runs only on the server.


2. Lazy Load Images

Loading times of images are key when you’re trying to improve Google PageSpeed Insights results. Images in your media library are some of the most notorious page speed killers. And that’s why it’s important to handle them properly. 

The most common mistake is loading all the images at once instead of lazy loading them. Lazy loading downloads images only when they approach the current viewport. For instance, the browser won't load an image at the bottom of the page until the user scrolls down to it, thereby saving precious milliseconds. Keeping load times as short as possible is key to optimizing your Core Web Vitals scores and minimizing the bounce rate.

Next.js comes with a built-in component called Image that implements lazy loading (and more) by default.

Example usage of the image component:

<Image
 src="/author.png"
 alt="Picture of the author"
 width={500}
 height={500}
/>

3. Use the Latest Image Formats and the Right Sizes

The size of an image differs based on its format. For example, a PNG image weighs more than a JPG, but its quality is much better. In recent years, new image formats like WebP and AVIF have been developed specifically for the web to optimize resource usage. They combine great quality with small file sizes, so they load quickly.

Unfortunately, since images on Jamstack websites are usually provided by the user in the CMS, it’s not possible to have all of them in one of these formats. The solution is to use Image Optimization cloud providers like Imgix (or use the built-in one provided by Next.js).

Some CMSes, like Sanity, come with image optimization built in. When using the Next.js Image component, we can define a loader that appends query params telling the provider to convert the image to one of the aforementioned formats. This alone will greatly improve your Google PageSpeed Insights score.


Optimize the Size of Your Images and Improve Load Times

Another thing is to size the images properly. Say we have a huge image, 4036x1024px for example. Unless the user has a 4k display, the extra pixels won’t make a difference. Well, apart from making the page load much longer. Not exactly what we want.

This effect is magnified even further on mobile devices because of their smaller screens and lower CPU and network performance; mobile versions usually load much slower. This is where srcsets come in – basically, a way to tell the browser which image it should load for a given screen width.

Combining srcsets with the aforementioned image-optimization cloud providers yields great results for your page's performance. Luckily, when using Next.js we don't have to do it manually – the Image component does it for us. When given a loader function, Next.js generates a srcset using the sizes from the config file's imageSizes and deviceSizes properties (we can also provide our own).
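For reference, here's a sketch of tuning those sizes in next.config.js. The arrays below mirror what I believe are the Next.js defaults – treat them as an assumption and override them to match your design's breakpoints:

```javascript
// next.config.js – a sketch; these arrays mirror the Next.js defaults,
// override them to match your design's actual breakpoints
module.exports = {
  images: {
    // Widths used for full-width images (srcset for different devices)
    deviceSizes: [640, 750, 828, 1080, 1200, 1920, 2048, 3840],
    // Widths used for smaller images rendered with a sizes prop
    imageSizes: [16, 32, 48, 64, 96, 128, 256, 384],
  },
};
```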

An example of a loader used with Sanity CDN:

const loader: ImageLoader = ({ src, width, quality }) =>
 `${src}?w=${width}&q=${quality || 75}&auto=format&fit=max`;

An example of usage of the loader:

<Image
 loader={loader}
 /*
 * the rest of the props
 */
/>

4. Lazy Load Scripts

Jamstack websites often include third-party scripts like Google Tag Manager, cookie consent managers, newsletter pop-ups, and other tools, which are in fact page speed killers. You really need to get rid of some of them if you're trying to reach a perfect PageSpeed Insights score.

Running so many scripts can cause even the most optimized website to score poorly in a Google PageSpeed Insights test. Some third-party scripts have a great impact on loading performance and can degrade the user experience drastically, especially if they are render-blocking or delay page content from loading.

One thing we can notice is that not all of those scripts have to load immediately. For example, showing the cookie consent banner or newsletter pop-up a few seconds (5–10) after the user visits the site won't hurt the user experience. Quite the opposite – it will greatly increase the Lighthouse performance score (and thus improve the user experience).

To achieve this, we can utilize Next.js's built-in Script component, which currently offers four different loading strategies:

beforeInteractive – load before the page is interactive

Injecting a script into the initial HTML from the server allows it to run before the self-bundled JavaScript executes. Use this strategy for any critical scripts that need to load and execute before the page becomes interactive.

Scripts that typically require this loading strategy include bot detectors and third-party libraries that must load before the JavaScript code runs. Remember, you can apply this strategy exclusively to scripts within the Next.js custom document component.

afterInteractive – load immediately after the page becomes interactive

Scripts using the afterInteractive strategy are injected client-side and will run after hydration. This is a great strategy for scripts that should run as soon as possible, but don’t have to do so before executing the JavaScript bundle. A perfect solution for tag managers and analytics.

lazyOnload – load during idle time

The lazyOnload strategy loads scripts during the idle time after the browser has fetched all other resources. This approach suits low-priority scripts that don’t require immediate execution, such as chatbots, cookie consent managers, or newsletter pop-ups.

worker – load in a web worker

This is the newest of the four: scripts utilizing the worker strategy are executed in a web worker using Partytown.

This allows us to offload the work from the main thread to a background one. One of the most common issues reported by Lighthouse is high main thread load, Partytown is a promising solution for that. 

However, it is still an experimental feature, since the library itself is still in beta. That doesn't mean we shouldn't use it, but we have to be cautious. I have successfully used it with Google Tag Manager and some cookie consent managers.
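Putting the non-critical strategies together, here's a minimal sketch (the script URLs are placeholders, not real endpoints):

```jsx
import Script from "next/script";

export const ThirdPartyScripts = () => (
  <>
    {/* Analytics: run as soon as possible, but only after hydration */}
    <Script src="https://example.com/analytics.js" strategy="afterInteractive" />
    {/* Chat widget: low priority, load during browser idle time */}
    <Script src="https://example.com/chat-widget.js" strategy="lazyOnload" />
  </>
);
```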


5. Utilize Code Splitting

Let’s say our web application contains fifty different sections. Usually, there will be a single component called Sections that takes an array of sections and based on their type renders them using corresponding components.

The problem with such a solution is that we’re always loading all of our sections into our bundle even though our page may only use a couple of them.

The old, inefficient way:

import Section1 from "@sections/section1";
import Section2 from "@sections/section2";
/*
 * ...
 */
import Section49 from "@sections/section49";
import Section50 from "@sections/section50";

export const Sections = ({ sections }) => {
  return sections.map((section) => {
    switch (section.type) {
      case "section1":
        return <Section1 {...section} />;

      case "section2":
        return <Section2 {...section} />;

      /*
       * ...
       */

      case "section49":
        return <Section49 {...section} />;

      case "section50":
        return <Section50 {...section} />;

      default:
        return null;
    }
  });
};

Luckily, Next.js provides us with a useful tool called dynamic.

It takes a function that returns a dynamic import of a component and returns a component that wraps the imported one. 

We can use the returned component just as we would use the imported one. The difference is that now the code of that component will be put in a separate bundle and will be downloaded only when the component has been mounted.

What that means is that if a page consists of four sections, only the code of four components will be included in the bundle, greatly reducing its size and improving the page speed rating.

To improve Google PageSpeed Insights score, try the new, optimized way:

import dynamic from "next/dynamic";
 
const Section1 = dynamic(() => import("@sections/section1"));
const Section2 = dynamic(() => import("@sections/section2"));
/*
* ...
*/
const Section49 = dynamic(() => import("@sections/section49"));
const Section50 = dynamic(() => import("@sections/section50"));
 
export const Sections = ({ sections }) => {
 return sections.map((section) => {
   switch (section.type) {
     case "section1":
       return <Section1 {...section} />;
 
     case "section2":
       return <Section2 {...section} />;
 
     /*
      * ...
      */
 
     case "section49":
       return <Section49 {...section} />;
 
     case "section50":
       return <Section50 {...section} />;
 
     default:
       return null;
   }
 });
};

6. Optimize SVGs (Img Src vs SVG Markup)

SVG images are often used for the site's logo in the header or as icons in different sections of a typical Jamstack website. Unfortunately, when a user uploads an SVG image to a CMS, the image is returned as a link to a CDN instead of as markup. Let me give you a couple of examples of why this might be an issue:

Let’s say you have a section with a list of features, each feature has an icon and some text associated with it – having 10 features means 10 extra requests for those icons. That’s not what we want.

Not only will it lower our Lighthouse performance score, but it is also going to result in a poor user experience. The user doesn't want to see placeholder images instead of icons during the first second of the page load.

Be Careful With Large Images. They Slow Down Your Site's Load Time.

Another issue is that if you’re using Next’s Image component (which you should be), while the image is loading, a placeholder is displayed in its place. This is a great feature for big images, like the background of a hero section, but not so good for a website’s logo in the header, which is usually the first thing the user sees.

Luckily, there’s a simple solution for both of these problems. We can fetch the markup from the CDN URL during the build (for example in getStaticProps) and pass it as a prop of our image component. 

We have to be careful with this, though, because inlining SVGs increases the size of the HTML, which impacts the Lighthouse performance score. It's fine to inline 10–20 small images like the aforementioned icons, but we don't want a big, complex SVG that weighs a couple of hundred kilobytes to bloat the HTML.

A util function that fetches the SVG markup from a URL:

export const fetchSvgMarkup = async (url) => {
  const result = await fetch(url);

  if (!result.ok) {
    throw new Error(`Failed to fetch SVG markup: ${result.status}`);
  }

  return result.text();
};
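Since inlining only pays off for small files, you could guard it with a size check before passing the markup to the component. A minimal sketch – the helper name and the 16 KB threshold are my own assumptions, not from any library:

```javascript
// Hypothetical helper (not from the article): decide whether an SVG is
// small enough to inline without bloating the HTML.
// The 16 KB threshold is an assumption – tune it for your own pages.
const INLINE_LIMIT_BYTES = 16 * 1024;

const shouldInlineSvg = (markup, limit = INLINE_LIMIT_BYTES) =>
  Buffer.byteLength(markup, "utf8") <= limit;
```

SVGs that fail the check can keep being served as regular images via their CDN URL.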

7. Utilize the Next.js Image Component's Priority Prop

In recent years, browser support for preloading images has grown, and it is now present in all the most popular browsers. The Image component does it all for us – we simply set a single prop: priority.

Preload an Image First and Optimize the Web Page Loading.

Preloading an image greatly improves the LCP (Largest Contentful Paint) score, which measures the time it takes to paint the largest element in the initial viewport. This is perfect for the background image of a hero section (present on almost every page of a Jamstack website) or for full-screen images.

Proceed With Caution

This feature has to be used with caution: setting the priority prop on too many images on a single page may actually bring the opposite result and slow the page down!

It's hard to determine programmatically whether a section will be visible in the initial viewport, so it might be a better idea to add a checkbox to the image model in the CMS, along with guidance for editors on when to check it.

“We ran tests on a site that uses JavaScript to lazy-load responsive images. Preloading resulted in images loading 1.2 seconds faster.”

https://web.dev/preload-responsive-images/

An example of priority prop usage:

<Image
 src="big_background.png"
 alt="The background image of a hero section"
 width={1920}
 height={1080}
 priority
/>

8. Use Resource Directives

Resource directives tell the browser which resources it should load first. Using them correctly speeds up the loading time of a page. The three most important directives are:

dns-prefetch 

dns-prefetch tells the browser to perform the DNS lookup for an external domain ahead of time.

Its use cases are the same as preconnect's, but preconnect is a slightly more expensive operation and should be reserved for the most important origins. Use dns-prefetch for all the others, or as a fallback, since its browser support is better than preconnect's.

preconnect

This does the same as dns-prefetch, but also performs TCP negotiation and TLS handshake. It should be used for external sources that you know the browser will need quickly.

A good example of using this directive is pre-connecting to third-party font providers like Google Fonts. Another use case is pre-connecting to a third-party provider hosting a stylesheet needed by some library – for example, some carousel libraries host their required CSS on a CDN.

Pre-connecting also makes sense for the CDN that hosts the images used on our website. For example, when using Sanity, all of the images are served from https://cdn.sanity.io/, so pre-connecting gives us a great boost in performance.

preload

Preload tells the browser to load the specified file to the cache. It should be used for files that are required to display the page, such as a font that’s specified in a stylesheet.

Besides caching the resource, the preload directive is also a hint to the browser to load the specified file as soon as possible. Some use cases of preload would be:

Fonts defined in stylesheets

The font faces declared in CSS start loading only after the CSS has been loaded and parsed; preloading causes the browser to load those fonts immediately.

Images defined in stylesheets

Images referenced from a stylesheet won’t start loading until the CSS file is downloaded and parsed. Preloading those images will mitigate that issue. 

Example of using the preconnect directive for Google Fonts:

<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin />
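The preload directive for a self-hosted font can look like this (the font path is a placeholder; note that fonts need the crossorigin attribute even for same-origin requests):

```html
<link
  rel="preload"
  href="/fonts/my-font.woff2"
  as="font"
  type="font/woff2"
  crossorigin
/>
```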

9. Load Libraries Dynamically

Third-party libraries are great, but they can weigh quite a bit, which causes the website to lose precious pagespeed score. By default, the source code of those libraries is placed in the JS bundle fetched when the user visits the page.

Say, for example, we're using Formik to handle the contact form at the bottom of a page, or Algolia for searching. Neither of those features is used immediately (if at all) after visiting the page. What we can do is use dynamic imports to load them later, in a separate bundle.

For Algolia, we can import its source code when the user focuses the search input; by the time they finish typing the query, the library will be ready.

As for Formik, we can start fetching the library on the first user interaction with the page, like scroll or touch event.

Example of dynamically importing a package on the scroll event:

useEffect(() => {
  const handleScroll = async () => {
    const lib = await import("some-library");

    // Use the library, or assign it to a ref
  };

  document.addEventListener("scroll", handleScroll, {
    once: true,
  });

  return () => {
    document.removeEventListener("scroll", handleScroll);
  };
}, []);

10. Utilize Intersection Observer API


Leveraging the Intersection Observer API in Next.js applications offers a performant solution to the issue of unnecessary data fetching. By deferring API calls until a component enters the viewport, we can significantly reduce initial page load times and conserve bandwidth.

This is particularly effective for components that fetch large JSON payloads or rely on third-party APIs. Implementing this involves setting up an Intersection Observer to track the visibility of a component. When the user scrolls and the component becomes visible, the observer triggers a data fetch from the API. This on-demand loading, or ‘lazy loading’, ensures that data is only requested when needed, aligning network activity with user interactions.
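A minimal sketch of this pattern as a React hook – the useInViewFetch name and the fetchData callback are illustrative, not from any specific library:

```jsx
import { useEffect, useRef, useState } from "react";

// Illustrative hook: runs fetchData once, when the element becomes visible
export const useInViewFetch = (fetchData) => {
  const ref = useRef(null);
  const [data, setData] = useState(null);

  useEffect(() => {
    if (!ref.current) return;

    const observer = new IntersectionObserver(([entry]) => {
      if (entry.isIntersecting) {
        // Fetch only once, then stop observing
        observer.disconnect();
        fetchData().then(setData);
      }
    });

    observer.observe(ref.current);

    return () => observer.disconnect();
  }, [fetchData]);

  return { ref, data };
};
```

A component would attach the returned ref to its root element and render a placeholder until data arrives.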

Summary – Getting the Search Engine to Work for You With Next.js

As you can see, you don't need to go looking for a new hosting company, migrate your blog from WordPress, or do a complete SEO audit of your site to improve your Google PageSpeed Insights score and website performance in general. As a developer, there are a lot of clever solutions you can apply to improve your website's page speed, page load, and all the other Google metrics.

Are you a developer looking for smart solutions & ideas? Consider signing up for our newsletter for weekly updates!
