My latest project, a content site with near-zero costs and near-100% PageSpeed: Part 2

Timothy Teoh
6 min read · Jan 24, 2021


In part 1 I wrote about the challenge of minimizing costs for content websites; in this piece I focus on maximizing performance.

Frontend development is a curious beast. Much has been said of the proliferation of Javascript libraries and frameworks — my favourite quip (I can’t remember the source) being that if you pick a word from the dictionary and add .js to it, a Javascript package of that name probably exists.

The mainstream frontend solutions I've seen coalesce around the following, each addressing what is seen as a main problem:

  • Interactive components, state management, and routing: A framework like React (or Vue or similar) is used. The main differentiator is developer experience.
  • Server-side rendering and universal components: NextJS (or NuxtJS or similar) or equivalent self-tooling. The goal here is to ensure a page looks and acts the same whether accessed directly or transitioned to from another page, the prevailing wisdom being that users prefer it if your site doesn't "reload" as they navigate it.
  • A UI framework of choice: The consideration here is how good the components look and how much time they save developers. It is very common today for every interactive component on a given site to be from a third-party UI framework, because it saves development time.

A widespread performance challenge


Google has taken the lead in measuring website performance with their Lighthouse PageSpeed Insights (PSI) tool, which measures against a set of metrics known as the Core Web Vitals (CWV). Both "lab" scores (from a simulated device) and real-world scores (from actual Chrome users, if you have a large enough sample) are available. CWV will be a ranking signal in Google Search from this year onwards, which makes it essential knowledge.

It’s a lot to digest, but once you read the details of how exactly PSI and CWVs work, you realize it does a good job of measuring pain points from the user’s perspective. I’ve worked on many different use-cases, but let’s take the case of an imaginary user, a reader who is interested in cooking:

  • First Contentful Paint (FCP) and Largest Contentful Paint (LCP) describe how fast above-the-fold content loads for a reader. Slow infrastructure, unoptimized images, and heavy pages penalize this score.
  • First Input Delay (FID) and Total Blocking Time (TBT) describe how long it takes for a page to respond to user interaction. Heavy-handed use of third-party tags like ads and trackers, and too much scripting on a site because of feature bloat (e.g. sign-up forms, referral tags, expensive image carousels) are all too common ways to get a bad score here.
  • Cumulative Layout Shift (CLS) measures how much a site’s layout shifts while a user is trying to browse it (it ignores layout shift caused by user interaction). Feature bloat is a common culprit here: many sites constantly load in new components that have nothing to do with the main recipe, causing elements to jump around.
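To make the last metric concrete, CLS is computed with a "session window" rule: layout shifts are grouped into windows that last at most five seconds with gaps under one second, shifts caused by recent user input are ignored, and the reported CLS is the largest window total. Below is a plain-JavaScript sketch of that rule; the entry shape mirrors what a `PerformanceObserver` for `layout-shift` entries delivers, but the function and variable names are my own.

```javascript
// Compute Cumulative Layout Shift from layout-shift entries using the
// session-window rule: shifts group into windows of at most 5s with
// gaps under 1s, and CLS is the largest window sum. Entries flagged
// hadRecentInput (shifts the user caused) are ignored.
function computeCLS(entries) {
  let cls = 0;          // largest session-window total seen so far
  let windowSum = 0;    // running total of the current window
  let windowStart = 0;  // startTime of the first entry in the window
  let lastTime = 0;     // startTime of the previous counted entry

  for (const e of entries) {
    if (e.hadRecentInput) continue; // user-initiated shifts don't count
    const gapTooLong = windowSum > 0 && e.startTime - lastTime >= 1000;
    const windowTooLong = windowSum > 0 && e.startTime - windowStart >= 5000;
    if (gapTooLong || windowTooLong) windowSum = 0; // start a new window
    if (windowSum === 0) windowStart = e.startTime;
    windowSum += e.value;
    lastTime = e.startTime;
    cls = Math.max(cls, windowSum);
  }
  return cls;
}

// Two shifts close together form one window; a shift long after them
// starts a new window, so CLS is the larger of the two window totals.
const entries = [
  { startTime: 100, value: 0.05, hadRecentInput: false },
  { startTime: 600, value: 0.05, hadRecentInput: false },
  { startTime: 9000, value: 0.02, hadRecentInput: false },
];
console.log(computeCLS(entries)); // 0.1
```

In a real page you would feed this from `new PerformanceObserver(...).observe({ type: "layout-shift", buffered: true })`; in practice the web-vitals library does this for you.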

The disconnect between developer experience and performance

Many teams fall into the trap of prioritizing developer experience and “productivity”. The problem is that, unlike downstream/backend services, the performance of your site in the browser is not something you can buy your way out of. If you have a slow API or database, you may have the option to:

  • Put a cache in front of your backend
  • Scale out horizontally/vertically and optimize later

You don’t have this option when it comes to browsers.

Today’s frontend toolset abstracts away many concerns. This makes for a better developer experience and productivity at first, but an incredibly steep learning curve when it comes to understanding the factors that affect performance, which is a big challenge for budding developers and smaller teams. Going back to the list I made earlier:

Interactive components, state management, and routing:

The ease of developing interactivity can lead to feature bloat and multiple event listeners on everything (just because you can doesn’t mean you should). My personal pet peeve is React sites that override normal link clicks to provide a custom page transition.

The list of event listeners on a single link on a React site

In the early days of React, it was commonly used only for the parts of the page that required interactivity. It is more mainstream today to use it for single-page applications (SPAs), which means that if you’re not careful, your site can end up relying on scripting to do even simple things like loading images or opening links!
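The older "React only where needed" model can be approximated without any framework: ship plain HTML that works with no JavaScript at all, and let a small script upgrade only the elements explicitly marked as interactive. This is a sketch of that idea; the `data-island` attribute and the registry are conventions I invented for illustration, not any library's API.

```javascript
// Progressive enhancement: the page is plain HTML that works without
// scripting; JavaScript only upgrades elements marked as interactive
// "islands" via a data-island attribute.
const islands = {
  // Hypothetical examples mapping an island name to an init function.
  "search-box": (el) => el.addEventListener("input", () => { /* filter results */ }),
  "comment-form": (el) => el.addEventListener("submit", () => { /* validate */ }),
};

// Pure helper: given island names found in the markup, return only the
// ones we know how to initialize, so stale markup can't crash the page.
function resolveIslands(names, registry) {
  return names.filter((name) => name in registry);
}

// Browser wiring: only runs when a DOM exists, so the same file can be
// loaded on the server (or in Node) without blowing up.
if (typeof document !== "undefined") {
  const nodes = [...document.querySelectorAll("[data-island]")];
  for (const name of resolveIslands(nodes.map((n) => n.dataset.island), islands)) {
    document.querySelectorAll(`[data-island="${name}"]`).forEach(islands[name]);
  }
}

console.log(resolveIslands(["search-box", "legacy-widget"], islands)); // [ 'search-box' ]
```

Links and images in this model are ordinary HTML, so they keep working even if the script never loads.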

Server-side rendering and universal components:

A popular way to serve a React site today is to server-render content, and then proceed to attach scripting and interactivity (the “react” part of React) to components.

Before React, the main challenge in browser performance lay on the server: how fast your server could render pages and send down content over the network. Today, hydrating and mounting components on the browser (again, something you can’t throw money at to fix) is a huge bottleneck on complex pages with no easy solutions in sight. Google “partial hydration” and you’ll see that there are no standard ways to implement it — and many developers may not even know what hydration means!

A UI framework of choice:

UI component frameworks make developing interactive components easy, but because they are developed to cover common use-cases, you will end up with functionality you didn’t need. Because the UI framework will end up powering large swathes of your code, it can be a challenge to stay on top of performance updates.

I liken UI frameworks to making it easy to build a campervan with every feature you can think of; the challenge comes when the campervan also needs to go from 0 to 100 km/h in five seconds, like a Ferrari. Performance is also a feature.

For my site, less is more

I talked in part 1 about using GatsbyJS. GatsbyJS by default builds pages according to the “server render, then hydrate and mount” model I mentioned earlier.

For all the benefits of Gatsby, the “hydrate and mount” step that React requires by default meant I was able to achieve a Lighthouse score of only ~60, even with highly optimized thumbnails and fast infrastructure.

I decided then to drop React entirely from the browser: React is still used to render the page HTML at build time, but React and Webpack never get loaded in the browser, so the site uses no React for client-side functionality. Instead, I load AMPHTML components in the browser for the few elements that are interactive. Accelerated Mobile Pages (AMP) is a framework from Google targeted at mobile devices; AMPHTML is an extension of it that works on all devices.

The result was a massive increase in Lighthouse scores. There’s a certain sense to this: as I mentioned above, the prevailing wisdom is that you need scripting to open links without a “reload” for smoother page transitions, but you don’t need that if your pages load fast enough. And because the site is served as static pages, every page loads quickly and reliably.

As a reminder of just how much loading too much Javascript affects site speed (and how easy it is to do accidentally): the popular comment plugin Disqus weighed in at ~500 KB and dropped the Lighthouse score by ~20 points when implemented! I ended up scripting it so that comments only load when a user scrolls down.
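That scroll-triggered loading takes only a few lines with IntersectionObserver: the embed script is injected once, the first time the comment container approaches the viewport. A sketch, assuming a container with a hypothetical `comments` id; "example" stands in for a Disqus shortname in their standard embed URL.

```javascript
// Load a heavy third-party script (here, a Disqus-style comment embed)
// only when the reader scrolls near it, so its weight never affects
// the initial page load.
function createLazyLoader(load) {
  let loaded = false;
  // Decision logic kept pure so it can be tested without a DOM:
  // call load() exactly once, the first time the target intersects.
  return function onIntersect(entries) {
    if (loaded) return false;
    if (!entries.some((e) => e.isIntersecting)) return false;
    loaded = true;
    load();
    return true;
  };
}

if (typeof document !== "undefined" && "IntersectionObserver" in window) {
  const target = document.getElementById("comments"); // hypothetical container id
  const onIntersect = createLazyLoader(() => {
    const s = document.createElement("script");
    s.src = "https://example.disqus.com/embed.js"; // "example" = your shortname
    s.async = true;
    document.body.appendChild(s);
  });
  // rootMargin starts the load a little before the comments are visible.
  new IntersectionObserver(onIntersect, { rootMargin: "200px" }).observe(target);
}
```

The same pattern works for any deferred third-party widget, not just comments.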

In summary

  • If your use-case is one where content isn’t updated frequently (less than once an hour, say), using static site generation will give you a reliable, fast, scalable infrastructure at a tiny fraction of the cost.
  • Rethink what you really need to be interactive on a page. On a content site, my bet is that your readers want to read and navigate content, not to click on things. The less interactivity you have, the less of a challenge it will be to optimize that interactivity. Performance is a feature.



Timothy Teoh

Full-stack software architect and technology leader from Kuala Lumpur, Malaysia