Make Webpages Load Faster with Lazy Loading


Media resources like images, videos, and iframes are a critical part of the web. Whether banners, product images, videos, or logos, it is impossible to imagine a website without them.

According to the HTTP Archive's State of the Web report, images account for more than 50% of the median webpage's byte size. At the 90th percentile, sites download about 4.7 MB of images on desktop and mobile. That's quite a lot of memes!

So, it's not like we can do away with them. Images are essential to the overall user experience, so we need to make our web pages load fast with them.

The concept of lazy loading

Lazy loading is a technique that defers the loading of non-critical resources at page load time. Instead, these resources are loaded at the moment of need. "Non-critical" is often synonymous with "off-screen". We can apply lazy loading to just about any resource on a page. For example, even a JavaScript file can be held back if it is best not to load it initially.

This approach is beneficial because:

  1. It doesn't waste data. Users with limited data plans can save data, and users with unlimited data plans can save precious bandwidth for more important resources.
  2. It saves processing time, battery, and other system resources. After a media resource is downloaded, the browser must decode it and render its content in the viewport.
  3. It saves money. Content Delivery Networks (CDNs) deliver media resources at a cost based on the number of bytes transferred. Reducing the total bytes delivered on the page can save a couple of pennies.

When we lazy load images, videos or iframes, we reduce the initial page load time, initial page weight, and system resource usage, all of which have a positive impact on performance.

Lazy loading in the browsers

The specification for web browser native support of lazy-loading landed in the HTML spec a couple of months ago. Now we can write something like:

<!-- lazy load by providing "lazy" value to "loading" attribute-->
<img src="image.png" loading="lazy" width="200" height="200" />

The current specification is vague about exactly when browsers are supposed to load a deferred resource. This ambiguity has led to implementations with different user experiences. At the time of writing, Chromium's Blink (Chrome), Mozilla's Gecko (Firefox) and WebKit (Safari) have all implemented lazy loading, and each implementation has settled on different margins:

  • Blink sets a margin of 1250px on low-latency network connections, and up to 2500px on high-latency connections.
  • Gecko sets no margin at all, resulting in images being loaded when at least 1px of them is visible to the user.
  • WebKit's implementation, although incomplete at the time of writing, seems to set its margins to 100px vertical and 0px horizontal. This gives the browser a small heads-up to start loading the image before it's scrolled into view.

The browser vendors have gone for different trade-offs between data-saving, perceived-performance, and how acceptable a temporary blank area is. These margins aren't set in stone and may change over time.

But the inconsistencies don't end here:

  • the auto value is not mentioned in the specification but is available (and is the default) only in Chromium,
  • the lazy value for iframes only recently made it into the standard, and it's available only in Chromium,
  • the loading attribute is currently supported by only about 64% of browsers.
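
Given this patchy support, one common coping strategy is to feature-detect native lazy loading and fall back to a script-based approach elsewhere. A minimal sketch (the helper name is ours):

```javascript
// Returns true when the environment supports the native `loading`
// attribute. In a browser, pass HTMLImageElement.prototype.
function supportsNativeLazyLoading(imgPrototype) {
  return "loading" in imgPrototype;
}

// In a browser, this could gate a fallback:
// if (supportsNativeLazyLoading(HTMLImageElement.prototype)) {
//   // rely on loading="lazy"
// } else {
//   // fall back to a JavaScript implementation
// }
```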

This is something the web community should stand against: applications whose end-user experience depends on the browser of choice.

Even worse, web developers don't have a say in the matter.

To get more control over these inconsistencies, let's try to implement lazy loading on our own.

The naive approach

Only images are used in the following examples, but these approaches can be reused for any other media resource or DOM element. The examples also assume that the user scrolls top-to-bottom, and images get loaded as soon as they enter the viewport.

With minor code modifications, the examples can be adapted to handle scrolling in any direction, and a positive or negative margin can be added to the image-loading check.
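
That margin tweak boils down to widening the visibility check by a fixed number of pixels. A sketch of such a predicate (the name is ours):

```javascript
// Returns true when an element's top edge is within the viewport,
// extended (or shrunk, with a negative value) by `margin` pixels.
function enteringViewport(top, viewportHeight, margin = 0) {
  return top <= viewportHeight + margin;
}
```

A call like enteringViewport(rect.top, window.innerHeight, 200) would start loading an image 200px before it scrolls into view.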

Let’s imagine we have a couple of images in an HTML document. Rather than defining a src attribute which will cause the browser to load the image immediately, let’s set a custom data attribute named data-lazy:

<!-- regular image -->
<img src="path/to/image.jpeg" />

<!-- lazy image -->
<img data-lazy="path/to/image.jpeg" />

We will read the data-lazy attribute when the image becomes visible in the viewport and use JavaScript to move it to the src attribute. Since we need to know about and react to the visibility of images on the page, let's listen to the scroll event:

const lazyImages = document.querySelectorAll("[data-lazy]");

window.addEventListener("scroll", event => {
 lazyImages.forEach(img => {
   const top = img.getBoundingClientRect().top;
   if (top <= window.innerHeight) {
     img.setAttribute("src", img.dataset.lazy);
   }
 });
});

First, we select all the images to be lazy-loaded using an attribute selector. After that, we set up a callback that will be executed whenever the scroll event fires.

Each time the scroll event fires, we iterate through the collection of images and check if any entered the viewport. Any image entering the viewport will get its src attribute set, which will trigger the browser to download and present the image to the user.

Note that the resize and orientationchange events are equally important, since resizing the browser window or changing the orientation of a device might bring an image into the viewport. They are omitted here for the sake of simplicity.
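
If we did want to cover those cases, the same check could simply be attached to all three events (a sketch; the function names are ours):

```javascript
// Extract the visibility check into a pure helper...
function isInViewport(top, viewportHeight) {
  return top <= viewportHeight;
}

// ...and reuse one callback for every event that can reveal an image.
function setupLazyLoading(win, doc) {
  const lazyImages = doc.querySelectorAll("[data-lazy]");
  const check = () => {
    lazyImages.forEach(img => {
      if (isInViewport(img.getBoundingClientRect().top, win.innerHeight)) {
        img.setAttribute("src", img.dataset.lazy);
      }
    });
  };
  ["scroll", "resize", "orientationchange"].forEach(evt =>
    win.addEventListener(evt, check)
  );
  check(); // catch images already visible on page load
}

// In a browser: setupLazyLoading(window, document);
```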

The above code snippet has quite a few problems, the first one being efficiency. Scrolling fires off many events, and the browser needs to recalculate the position of every observed element each time. The second issue is iOS: on some devices, the scroll event fires only after scrolling has finished.

Thankfully, requestAnimationFrame (rAF) can help us with these issues:

const lazyImages = document.querySelectorAll("[data-lazy]");

function loop() {
 // The usual lazy loading business
 lazyImages.forEach(img => {
   const top = img.getBoundingClientRect().top;
   if (top <= window.innerHeight) {
     img.setAttribute("src", img.dataset.lazy);
   }
 });

 // This is where the magic happens!
 requestAnimationFrame(loop);
}

loop();

rAF tells the browser to execute the provided callback before the next screen repaint. The initial loop() call kicks off a beautiful infinite recursion of 'self-scheduling' rAFs. A recursively scheduled callback can run anywhere from 0 to X times per second, where X is the display refresh rate (depending on how much work the callback does); per the W3C recommendation, it will generally match the display refresh rate in most web browsers.

Most devices have a display with 60Hz refresh rate, so rAF will usually fire 60 times a second (if the amount of work in the callback is small enough). Remember kids: To iterate is human, but to recurse is divine.

This approach is more convenient than listening to the scroll event, which fires hundreds or even thousands of times during scrolling. One further optimization would be to remove an image from the collection of lazy images once it has loaded. However, this still doesn't scale well: we now have an infinite loop performing checks on images even when the user isn't scrolling at all.
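
The "remove once loaded" optimization could be sketched as a pure function that returns only the still-pending images (the object shape here is a stand-in for real DOM elements):

```javascript
// Given a list of pending images (each with a `top` position relative
// to the viewport), load the visible ones and return only those that
// still need loading. `load` is whatever sets the src attribute.
function loadDueImages(pending, viewportHeight, load) {
  return pending.filter(img => {
    if (img.top <= viewportHeight) {
      load(img);
      return false; // drop it from the pending list
    }
    return true; // keep waiting
  });
}
```

Inside the rAF loop, the collection would be reassigned on every tick, shrinking as images load.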

Also, calling getBoundingClientRect will trigger the browser to calculate the style and layout synchronously. This is also called reflow or layout thrashing and is a common performance bottleneck. A large number of DOM elements reflowing many times a second is going to cause jank and ruin the precious user-experience, especially for users browsing the web using low-end devices.

If only there was a way to offload all this work to the browser...

The sophisticated approach

Meet Intersection Observer API: a way to asynchronously observe changes in the intersection of a target element with an ancestor element or with a top-level document's viewport. Asynchronously is the keyword here: We let the browser do the heavy-lifting and let it notify us when the intersection happens.

At the core of the Intersection Observer API are the following two lines:

const observer = new IntersectionObserver(callback, options);
observer.observe(target);

By providing an options object, we can define:

  1. The area to observe with the root property,
  2. How much to shrink or expand the root's logical size when calculating intersections with the rootMargin property,
  3. Breakpoints for invoking the handler with the threshold property.

The default threshold is 0, which invokes the handler whenever a target becomes partially visible or completely invisible. Setting the threshold to 1 would fire the handler whenever the target flips between fully visible and partially visible, and setting it to 0.5 would fire when the target passes the point of 50% visibility (in either direction). It is also possible to define an array of thresholds to invoke the callback on multiple breakpoints.

Observers default to monitoring the browser's viewport if the options object is omitted or the root property is not provided (or is null).
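
Putting the three options together, a fully specified observer might look like this (the concrete values are just examples):

```javascript
// Example options: observe against the viewport, start 200px early,
// and fire at 0%, 50% and 100% visibility.
const options = {
  root: null,                      // null means the browser viewport
  rootMargin: "0px 0px 200px 0px", // grow the bottom edge by 200px
  threshold: [0, 0.5, 1]           // multiple breakpoints
};

// In a browser: new IntersectionObserver(callback, options);
```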

Observer handlers are callbacks that receive two arguments:

  1. A list of IntersectionObserverEntry objects, each containing metadata about how a target's intersection has changed since the last invocation of the handler.
  2. A reference to the observer itself.

and it looks something like the following:

const callback = (entries, observer) => {
  entries.forEach(entry => {
    if (entry.isIntersecting) {
      // Do something
    }
  });
};

A single target can have multiple entries, each corresponding to a single threshold. It's often necessary to check which threshold triggered the callback by iterating over the entries.

The following code snippet represents our previous image lazy loading idea rewritten with the Intersection Observer:

const lazyImages = document.querySelectorAll("[data-lazy]");

lazyImages.forEach(img => {
 // Callback which will get invoked once the image enters the viewport
 const callback = (entries, observer) => {
   // With no options provided, threshold defaults to 0 which results
   // in an array of entries storing only ONE element
   entries.forEach(entry => {
     if (entry.isIntersecting) {
       const img = entry.target;
       img.setAttribute("src", img.dataset.lazy);

       // Observer can be disconnected to further optimize efficiency and not
       // to trigger the callback when the image exits the viewport
       observer.disconnect();
     }
   });
 };

 // Create an observer for each image
 const io = new IntersectionObserver(callback);
 io.observe(img);
});

There is no need to perform any calculations (looking at you, getBoundingClientRect) because the isIntersecting value tells us whether or not the image is visible. Once the image is visible, we can disconnect the observer so that the callback doesn't get triggered again when the image leaves the viewport.

It is also possible to create a single Intersection Observer that observes multiple targets, which saves even more resources. If n images are observed, the callback fires at most 2n times in total: once per image on page load (to check whether the target is already in the viewport) and once when the target enters the viewport. That is just crazy performant compared to the naive methods!
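
A sketch of that single-observer variant, using unobserve rather than disconnect so the remaining images stay watched (the factory function is ours; it takes the observer constructor as a parameter only so the sketch stays testable outside a browser):

```javascript
// One observer watches every image; each image is unobserved once
// its real src has been set.
function createLazyObserver(ObserverCtor, images) {
  const observer = new ObserverCtor((entries, obs) => {
    entries.forEach(entry => {
      if (entry.isIntersecting) {
        const img = entry.target;
        img.setAttribute("src", img.dataset.lazy);
        obs.unobserve(img); // stop watching this image only
      }
    });
  });
  images.forEach(img => observer.observe(img));
  return observer;
}

// In a browser:
// createLazyObserver(IntersectionObserver, document.querySelectorAll("[data-lazy]"));
```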

For a better understanding, check out the visualization demo below, created by Michelle Barker.

Visualization demo by Michelle Barker

All the major browsers have supported it for some time now. As expected, Internet Explorer doesn't support it at all, but there's a polyfill available from the W3C that takes care of that.

And yes, all the browsers that implemented native lazy loading used Intersection Observer under the hood.

Creating better user experiences with lazy loading

Avoiding Content Reflow

Until an image has been downloaded, the browser doesn't know how much space it will take up, so unless we specify dimensions using CSS, the enclosing container has no dimensions.

Once the image loads, the browser drops it onto the screen. This sudden change in layout pushes other elements around and causes reflow. That isn't just an unpleasant user experience but also a potential performance problem.

This can be avoided by specifying the height and/or width of the enclosing container. When the image later loads, it fits perfectly into the already-sized container, and the rest of the content around that container does not move.
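
In code, reserving the space can be as simple as setting the container's dimensions up front (a sketch; the helper is ours):

```javascript
// Give the container fixed dimensions before the image inside it
// loads, so surrounding content never has to move.
function reserveSpace(container, width, height) {
  container.style.width = width + "px";
  container.style.height = height + "px";
}
```

The same effect can be achieved declaratively with width and height attributes on the img element itself, as in the native lazy loading example earlier.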

Utilizing Placeholders

What if there is no way to know the dimensions of an image? Or if an image download takes a lot of time because of its size? An alternative could be to use a placeholder. A placeholder is an element that appears in the container until the actual image is loaded. One could pick a Generic Placeholder (low-quality image with a fixed color), a Dominant Color Placeholder (e.g. Pinterest) or even a Low Quality Image Placeholder (blurred version of the original, e.g. Facebook or Medium; can be seen in the following video).

Blurred version of the original image

The switch could be achieved by setting the src attribute to point to a placeholder then switching its value to the original image once it meets the lazy-loading criteria.
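
One way to perform that switch is to download the original off-screen and swap it in only once it has fully arrived, so the user never sees a half-rendered image (a sketch; the Image constructor is injected only to keep the example testable outside a browser):

```javascript
// Preload the real image, then replace the placeholder src once the
// download completes.
function swapWhenLoaded(img, realSrc, ImageCtor) {
  const loader = new ImageCtor();
  loader.onload = () => img.setAttribute("src", realSrc);
  loader.src = realSrc; // starts the download
}

// In a browser: swapWhenLoaded(imgElement, imgElement.dataset.lazy, Image);
```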

One problem with this approach is that we have to maintain at least two versions of each image. Another is that the placeholders still have to be downloaded on page load, although they amount to far less data than the originals. Smaller placeholder images can also be inlined as data URLs to save on network requests.

Nevertheless, it is clear that the transition from the placeholder to the actual image gives the user an idea of what is to come in place of that placeholder and improves loading perception.

Setting the margins

In the previous examples, we checked for the moment when an image enters the viewport. The problem with this approach is that users might scroll through the page really fast, while the image needs some time to load and appear on the screen.

Instead of loading the image exactly when it enters the viewport, some margin could be added. This provides additional time for the image to load, between the load trigger and the actual entry into the viewport. The margin could even be adjusted dynamically based on the user's device (using the User-Agent string) or the Network Information API. Smaller margins could be set for users with good networks and performant devices, since they can download, decode, and render resources faster than users with low-end devices.
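
A sketch of such network-aware margins, based on the Network Information API's effectiveType values ("slow-2g" through "4g"; the concrete pixel values are just examples, loosely inspired by Blink's own margins):

```javascript
// Pick a larger rootMargin for slower connections so images get a
// bigger head start before entering the viewport.
function marginForConnection(effectiveType) {
  switch (effectiveType) {
    case "slow-2g":
    case "2g":
      return "2500px 0px";
    case "3g":
      return "1250px 0px";
    default:
      return "500px 0px"; // "4g" or unknown
  }
}

// In a browser (navigator.connection is not available everywhere):
// const type = navigator.connection ? navigator.connection.effectiveType : "4g";
// new IntersectionObserver(callback, { rootMargin: marginForConnection(type) });
```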

Not everything must be lazy-loaded

Like everything in life, it is possible to have too much of a good thing. Lazy loading might reduce the initial page load, but it also might result in bad user experience if some resources are deferred when they shouldn't be. Resources present in the viewport on page load should probably not be lazy-loaded. Mobile and desktop devices have different screen sizes and a different number of resources will be visible initially, which should be taken into account.

Also, what is the point of lazy loading if a page is too small, and there is nothing to scroll?

The JavaScript dependency

This entire idea of lazy loading depends on JavaScript being enabled and available in the user's browser. Most of the users will likely have JavaScript enabled, but it is always good to plan for cases where it is not.

An option is to use the <noscript> tag. But that is as good as it gets and is certainly a story on its own. Check this Stack Overflow thread to get some clarity on this subject.

Also, as an anti-tracking measure, native lazy loading does NOT defer images when JavaScript is disabled; in that scenario it offers no advantage at all. This is just another reason to be wary of the native implementation.

What about SEO?

I can already hear someone in the background yelling: "But what about SEO, Googlebot can't execute JavaScript... and it won't scroll your page."

In May 2019, Google announced that Googlebot would regularly update its rendering engine to keep up with the latest web platform features, and with that, support for Intersection Observer landed. Google is not too specific about how its crawler indexes such images (since it doesn't scroll), but it appears to handle the IntersectionObserver API so that observed content is still rendered and can be indexed.

Increase loading performance, reduce the overall page size

We covered a lot of ground about lazy loading. Implemented well, it can significantly improve loading performance while reducing overall page size and delivery costs, by deferring resources that aren't needed upfront.

The hype around native lazy loading was quite big, but the specification is vague, and each browser vendor implemented it in its own way. This brings inconsistent user experiences and lowers the trust we have in the applications we build. As developers, we should not demand such out-of-the-box features from our platform; it is tough to create a fully-functioning, unified, and consistent solution for such a complex problem.

Instead, we should demand optimized low-level APIs (like Intersection Observer) that let us elegantly solve complex problems and create scalable solutions (like lazy loading).

What it would look like if the moon lazy-loaded, interpretation by Mario Kovačević.