If you own or manage an ecommerce business, you’re often thinking about increasing sales by improving conversion rate. Optimizing site speed, when done correctly, can be a great way to improve this metric. However, analyzing web performance accurately can be overwhelming, so we put together a guide based on our internal auditing process to help you determine if speed is an issue worth solving for your site.
Our auditing process is broken down into 4 key steps:
There are multiple tools you can use to analyze your site speed, though most of them use Lighthouse under the hood and then display the information in their own way.
For example, PageSpeed Insights, GTmetrix, and Shopify Speed Score all use Lighthouse to generate their overall performance scores and analysis. Surprisingly, the same website analyzed on the same day can look very different across all three tools.
Since these lab measurements are often inconsistent and based on a throttled (slower) simulation, we recommend using real user data to analyze your site speed instead. This data is far more important, because it represents how shoppers and customers are experiencing your site. By focusing on improving this data, you’re much more likely to impact important KPIs, like bounce rate, conversions, and average order value.
Additionally, this is the same data Google looks at when providing a web performance ranking benefit to your website. Improving your overall performance score or individual Lighthouse lab metrics has no direct impact on rankings, so don’t get caught chasing these numbers or purchasing cheap services that promise instant boosts to these scores.
The best tool for quickly looking at real user data is PageSpeed Insights. If the site you’re analyzing has enough traffic, you’ll be able to see real user data at the top of the report.
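If you want to pull this same field data programmatically, the Chrome UX Report (CrUX) API exposes it. Here’s a minimal sketch; the origin and API key are placeholders, and Node 18+ or a browser is assumed for `fetch`:

```javascript
// Sketch: query the Chrome UX Report (CrUX) API for origin-level field data.
// The origin and API key are placeholders; substitute your own.

// Build the request body for a CrUX queryRecord call.
function cruxRequestBody(origin, formFactor) {
  return JSON.stringify({
    origin,       // e.g. "https://example-store.com"
    formFactor,   // "PHONE" or "DESKTOP"
    metrics: ["largest_contentful_paint", "cumulative_layout_shift"],
  });
}

// Usage (requires a real API key):
async function fetchCruxData(origin, apiKey) {
  const res = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${apiKey}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: cruxRequestBody(origin, "PHONE"),
    }
  );
  return res.json(); // histogram and 75th percentile for each metric
}
```

This is the same 28-day field data PageSpeed Insights displays, so it’s handy for tracking the metrics over time without manually re-running reports.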
At a bare minimum, you want to see “Origin” data, which is gathered from visitors across the entire site, not just “This URL”, which is the data for whichever URL was analyzed. If the tab defaults to “Origin” when you analyze a URL, then that means there’s not enough data for the specific URL analyzed, like in this example:
If the site doesn’t have enough traffic even for Origin data, then PageSpeed Insights will look like this when analyzing:
Typically, this would mean the site is doing under 5,000 sessions per month. At this size, it’s better to focus on driving more traffic, rather than worrying about site speed.
Likewise, if you only see Origin data, it might be better to focus on driving traffic.
Once you can see data for at least the home page URL and sitewide, then site speed becomes more relevant. At this point, the site is likely doing 20-50k sessions per month. After 50k sessions, site speed becomes a no-brainer for ROI, and between 20-50k, it’s good to lay a solid foundation of great web performance.
Assuming there’s enough data, we recommend looking at the following real user data (mobile and desktop for each if possible):
The most visited collection and product pages are the most likely to have enough real user data to show up in PageSpeed Insights. Typically for ecommerce sites, collections and products have their own template, so if you have enough data on one page, you can infer the user experience on all the others.
Here are some example screenshots taken from a site, showing the mobile data for the above-mentioned categories:
With Chrome DevTools (right-click a page in Chrome and choose Inspect), you can throttle speeds under the Network tab and turn on a Core Web Vitals overlay under the Rendering tab. This makes it easier to see how the page loads, and how the measurements are affected as various elements load.
For example, we can take a closer look at the product page CLS issue we identified using this method: notice how CLS increases as new content loads in and shifts previously loaded content down.
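You can log the same shifts from the browser console with a `PerformanceObserver`. This sketch simplifies real CLS, which groups shifts into session windows, down to a plain running sum:

```javascript
// Sketch: approximate CLS by summing layout-shift entries that weren't
// caused by recent user input. (Real CLS groups shifts into session
// windows; a running sum is a simplification.)
function sumLayoutShifts(entries) {
  return entries
    .filter((e) => !e.hadRecentInput)
    .reduce((total, e) => total + e.value, 0);
}

// Browser usage: log the running total as content shifts during load.
if (typeof window !== "undefined" && "PerformanceObserver" in window) {
  let shifts = [];
  new PerformanceObserver((list) => {
    shifts = shifts.concat(list.getEntries());
    console.log("Approx. CLS so far:", sumLayoutShifts(shifts));
  }).observe({ type: "layout-shift", buffered: true });
}
```

Each logged jump tells you which newly loaded element pushed content around, which pairs well with the DevTools overlay.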
Additionally, you can analyze the URLs with WebPageTest to get a detailed report of which element is registering as the LCP and which elements are causing CLS. It’s also a helpful tool for comparing performance before and after changes are applied, which we’ll talk about further down in this guide.
For the above example, here were some of the action items we uncovered.
High TTFB only appears to be an issue on product pages, so it’s likely a 3rd party or feature specific to product pages causing this issue.
The page should be profiled with Shopify Theme Inspector to determine the source of higher than normal Time to First Byte (TTFB). This is usually caused by slow loops and conditional statements in Liquid. These can be fixed by loading the data with JS via Shopify’s API instead.
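As a sketch of that approach: Shopify storefronts expose product data as JSON at `/products/<handle>.js`, so a slow Liquid loop can often be replaced with a client-side fetch. The handle and `renderVariants` below are placeholders:

```javascript
// Sketch: instead of building product data in a slow Liquid loop,
// fetch it client-side from Shopify's AJAX product endpoint.
// "example-product" and renderVariants are placeholders.

function productJsonUrl(handle) {
  return `/products/${encodeURIComponent(handle)}.js`;
}

async function loadProduct(handle) {
  const res = await fetch(productJsonUrl(handle));
  if (!res.ok) throw new Error(`Product fetch failed: ${res.status}`);
  return res.json(); // includes variants, price (in cents), images, etc.
}

// Usage on a product page:
// loadProduct("example-product").then((p) => renderVariants(p.variants));
```

Because the heavy work now happens after the HTML response is sent, the server can return the page sooner, which is exactly what lowers TTFB.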
We found anti-flicker code, which creates a blank layer over content for a brief period of time, thus hiding content as it loads (often used during AB tests like with Google Optimize).
This technique can eliminate CLS, but it only masks the issue rather than solving it. Additionally, hiding the content makes FCP slower, and can make LCP slower too, depending on when the anti-flicker layer is removed.
If this code is being generated by Google Optimize (most likely), the simplest solution is to remove it when not in use, or make it conditional so it only runs on the page(s) being AB tested.
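One way to sketch that conditional approach, assuming the standard `async-hide` class the Optimize snippet toggles; the tested paths and timeout are placeholders:

```javascript
// Sketch: only apply the anti-flicker hide on pages that are actually
// part of an A/B test. TESTED_PATHS is a placeholder list.
const TESTED_PATHS = ["/products/example-product"];

function shouldHidePage(pathname, testedPaths) {
  return testedPaths.includes(pathname);
}

// Browser usage: gate the standard anti-flicker class on the path check,
// and always remove it after a short timeout as a safety net.
if (typeof document !== "undefined" &&
    shouldHidePage(location.pathname, TESTED_PATHS)) {
  document.documentElement.classList.add("async-hide");
  setTimeout(
    () => document.documentElement.classList.remove("async-hide"),
    2000
  );
}
```

This keeps the flicker protection where the experiment actually runs, while every other page renders immediately.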
We found this particular site was loading two image variations for almost every image. First, a low-quality image loads, which can improve how quickly a user sees something; then a loading animation plays before the larger, high-quality image loads, which is then re-measured as the LCP.
The trade-off is that a user sees an image sooner, but it’s low quality, and the swap delays the high-quality image, which hurts LCP and, in turn, Google’s web performance ranking boost.
In our opinion, this slight benefit isn’t worth the downside, so we’d recommend removing the loading animation, and only loading the higher quality image.
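A sketch of the single-image approach, assuming an image CDN (like Shopify’s) that accepts a `?width=` query parameter; the breakpoints and `sizes` values are illustrative:

```javascript
// Sketch: serve one responsive image instead of a low-quality preview
// plus a swap. Assumes a CDN that accepts a ?width= query parameter
// (as Shopify's does); adjust for your image host.
function buildSrcset(baseUrl, widths) {
  return widths.map((w) => `${baseUrl}?width=${w} ${w}w`).join(", ");
}

// Build markup that lets the browser pick the right size up front, with
// explicit dimensions so the image can't cause layout shift.
function responsiveImageHtml(baseUrl, alt, width, height) {
  return `<img src="${baseUrl}?width=${width}"
    srcset="${buildSrcset(baseUrl, [400, 800, 1200])}"
    sizes="(max-width: 768px) 100vw, 50vw"
    width="${width}" height="${height}" alt="${alt}">`;
}
```

With a `srcset`, the browser downloads one appropriately sized image, so there is no second load or animation to delay LCP.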
Some CLS issues on product pages are caused by the two image scenario above, but the majority of CLS issues come from the way galleries load in addition to the informational area containing the product variations, buy buttons, description, etc (as shown in our Chrome DevTools example).
Setting predefined heights for the gallery and informational sections will help eliminate CLS, as will displaying a static featured product image until the full gallery has loaded. This is similar to the anti-flicker approach, but while waiting for the gallery, it shows the product image the user actually wants to see rather than a blank white layer over the entire viewport.
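As a rough sketch of reserving that space, assuming a hypothetical `.product-gallery` container and a known image aspect ratio:

```javascript
// Sketch: reserve space for the gallery before its JS finishes loading,
// so late content can't shift the page. The selector and the 1:1 ratio
// are placeholders for your theme's markup.
function reservedHeight(containerWidth, aspectWidth, aspectHeight) {
  return Math.round(containerWidth * (aspectHeight / aspectWidth));
}

// Browser usage: set a min-height matching the eventual gallery size.
if (typeof document !== "undefined") {
  const gallery = document.querySelector(".product-gallery");
  if (gallery) {
    gallery.style.minHeight =
      `${reservedHeight(gallery.clientWidth, 1, 1)}px`;
  }
}
```

The same idea works in plain CSS via `aspect-ratio` or `min-height`; the point is that the space exists before the gallery script runs, so nothing below it shifts.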
Once you’ve gathered some potential optimization ideas, we recommend the following approach to ensure performance improves.
Pages should be optimized in order of product > collections > home:
1) Observation: Gather Baseline Data
Use tools like PageSpeed Insights, Search Console, WebPageTest, and Chrome DevTools to gather real user data, screenshots, and videos of how the site loads before making changes.
2) Hypothesis: Diagnose
Pick one action item at a time from the provided ones in this audit. Making multiple changes at once will make it difficult to determine which, if any, had an effect.
3) Test: Optimize
Make the changes in a preview theme or dev version of the site. This will prevent anything from accidentally breaking on the live site.
4) Analyze: Gather Data Again
Run the preview theme through a tool like WebPageTest or inspect it with Chrome DevTools to see if Lighthouse metrics have improved.
PageSpeed Insights real user data and Search Console update on a rolling 28-day basis, so these will only change once code changes have been live for an extended period.
If the test showed improved metrics and the site functionality wasn’t compromised, publish the changes and move onto the next action item.
If the test did not show improved metrics, first check the test was conducted properly, then that the code change was done correctly. If both of those hold true, then you may need a different hypothesis / action item to achieve the desired result.
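The analyze step above can be sketched as a small comparison helper. The metric names and the 5% noise allowance are our own illustrative assumptions, not a WebPageTest convention:

```javascript
// Sketch: compare baseline vs. test-run metrics exported from a tool
// like WebPageTest. Lower is better for all three metrics used here;
// the 5% threshold is an arbitrary allowance for run-to-run noise.
function improvedMetrics(before, after, noise = 0.05) {
  const names = ["lcp", "cls", "ttfb"];
  return names.filter(
    (name) => after[name] < before[name] * (1 - noise)
  );
}

// Example: which metrics clearly improved?
// improvedMetrics({ lcp: 3200, cls: 0.25, ttfb: 900 },
//                 { lcp: 2400, cls: 0.24, ttfb: 600 })
// -> ["lcp", "ttfb"] (the CLS change is within the noise allowance)
```

Requiring a margin beyond noise keeps you from publishing a change on the strength of normal run-to-run variance.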
Optimizing web performance is not a one-time fix. Growing sites face constant 3rd party updates, content changes, and new feature development that lead to performance degradation over time. Automated optimizations and monitoring help maintain good performance.
Though you can find actionable items to optimize manually from the above methods, we often recommend automated approaches when possible, for 2 key reasons.
1) Many web performance optimizations are difficult to implement and maintain manually, and some simply can’t be done manually. For example, detecting which 3rd parties should load in which order for each page, for the best possible experience, can be difficult to iterate on and test without automation.
2) Sites are always changing - content is regularly updated, 3rd parties push updates, platforms like Shopify make updates, themes make updates, browsers update, and teams regularly make code changes for requested features or design changes. All of these can reduce web performance unintentionally. Monitoring and correcting these dips in performance is very time consuming and difficult to do manually, so automating both the monitoring and the correction is much more efficient.
Our technology solves this exact problem. Render Better optimizes Shopify sites automatically, monitors for performance issues over time, and we can also provide solution engineer hours for tricky issues that require custom solutions.
If you’re interested in giving Render Better a try to optimize your site now and future-proof your web performance with automation and monitoring, then click here for a free audit to see if your site would be a good fit.