RUM vs Synthetic monitoring: how to monitor the performance of your web app?

by Boris Rogier, Co-founder | Apr 29, 2021 | Application Performance, Articles, Digital Experience

How should you monitor the performance of your web app: is RUM (Real User Monitoring) or Synthetic monitoring the better method?


There are many ways to monitor how fast your web app or site renders for your users. “What’s the best way to monitor web performance?” is a common question, and we hear two very common answers: synthetic testing and Real User Monitoring (RUM). Which one is best for you?

First, all web developers do some form of performance testing before going live or shipping their latest release. The questions we would like to cover here are:

  • Depending on what your web platform consists of, the profile of your users and how critical web performance is, how should you monitor it? 
  • What can you expect from each type of experience monitoring tool (synthetic testing and RUM – real user monitoring)? 
  • What are the pros and cons of each method?

Let’s start with the simplest web performance test approaches

Chrome DevTools

Any web developer or troubleshooter has at some point used either Firebug in Firefox or DevTools in Chrome (F12 key). That view gives you a clear picture of how the page is loaded on your machine and offers options such as disabling the cache, simulating different network conditions, and rendering at multiple screen sizes.

This is definitely a great start as it shows:

  • Overall loading times (LCP, FCP, PLT, …)
  • Weight of the overall page and the size of each resource, how it is compressed (or not)
  • The rendering path, along with any long tasks and slow resources
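
If you want the same numbers programmatically, the data DevTools visualizes is exposed by the W3C Performance APIs. Here is a minimal sketch, assuming a modern browser; it can be pasted into the DevTools console:

```typescript
// Minimal sketch: reading the same W3C timing data DevTools visualizes.

// Navigation Timing: the overall page load phases.
const [nav] = performance.getEntriesByType(
  "navigation"
) as PerformanceNavigationTiming[];
console.log("DNS:", nav.domainLookupEnd - nav.domainLookupStart, "ms");
console.log("TTFB:", nav.responseStart - nav.requestStart, "ms");
console.log("Page load:", nav.loadEventEnd - nav.startTime, "ms");

// Paint Timing: First Contentful Paint (FCP).
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(entry.name, entry.startTime.toFixed(0), "ms");
  }
}).observe({ type: "paint", buffered: true });

// Largest Contentful Paint (LCP): reported incrementally as
// larger elements render; the last candidate is the final value.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const last = entries[entries.length - 1];
  if (last) console.log("LCP candidate:", last.startTime.toFixed(0), "ms");
}).observe({ type: "largest-contentful-paint", buffered: true });
```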

The downside is that everything it shows depends on the developer’s or tester’s machine resources (RAM for script processing, for example) and on that machine’s connectivity and geographical location.

There is a significant risk that real users of the app have fewer resources, smaller screens, poorer connectivity, etc.

And finally, these tools provide lots of data, but it is not that easy to interpret.

You will see hereunder a screenshot of the Network view of DevTools:

Chrome DevTools Waterfall view

PageSpeed Insights

A second approach consists of testing from another location using a free or paid service. In that case, the testing is done from a remote location; sometimes the service is offered from a multiplicity of locations to evaluate the impact of that difference on the web performance perceived by users.

A good example of these services is PageSpeed Insights from Google. 
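
PageSpeed Insights can also be scripted through its public v5 API. Here is a minimal sketch; the tested URL is a placeholder, and heavier usage requires an API key passed as the key query parameter:

```typescript
// Minimal sketch: querying the PageSpeed Insights v5 API for a URL.
const endpoint = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed";

async function pageSpeed(url: string): Promise<void> {
  const res = await fetch(
    `${endpoint}?url=${encodeURIComponent(url)}&strategy=mobile`
  );
  const data = await res.json();
  // Lab metrics come from Lighthouse; field metrics (when available)
  // come from the Chrome UX Report.
  const lcp =
    data.lighthouseResult?.audits?.["largest-contentful-paint"]?.displayValue;
  console.log(`LCP for ${url}:`, lcp);
}

pageSpeed("https://example.com"); // placeholder URL
```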

The limitation of that approach is that the test is still performed from a location with great connectivity (i.e. machines hosted in the cloud or in datacenters with relatively direct access to backbones), and these machines have a relatively large set of resources to render the page.

While these two approaches represent good practices, they share a common drawback: they are manual tests conducted on demand; they do not capture the changes applied to your app, nor the changing conditions of the public infrastructure between your users and your hosting platform.

You will see hereunder a screenshot of the results of PageSpeed Insights:

Google PageSpeed Insights testing the performance of a web page

Synthetic monitoring for web performance

Synthetic testing consists of replaying user scenarios on a regular basis and monitoring the proper execution of the app and its response times. It can be taken to a large scale by multiplying the number of locations from which the web application’s performance is tested.

It can also offer multiple test options, such as the choice of the emulated user device or the browser used to execute the test.
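
As an illustration, here is a minimal sketch of such a scripted check using Playwright, one common open-source option; the URL, the emulated device, and the 3-second threshold are placeholder assumptions, not a particular vendor’s implementation:

```typescript
// Minimal sketch of a synthetic check: replay a scenario, emulate a
// device, and report timings and errors. A commercial platform runs
// the same idea on a schedule, from many locations.
import { chromium, devices } from "playwright";

async function runCheck(url: string): Promise<void> {
  const browser = await chromium.launch();
  const context = await browser.newContext({ ...devices["iPhone 13"] });
  const page = await context.newPage();

  const start = Date.now();
  const response = await page.goto(url, { waitUntil: "load" });
  const loadMs = Date.now() - start;

  // Report the execution result: HTTP status and response time.
  console.log(`${url} -> HTTP ${response?.status()} in ${loadMs} ms`);
  if (!response?.ok() || loadMs > 3000) {
    console.error("Check failed: raise a proactive alert here.");
  }
  await browser.close();
}

runCheck("https://example.com"); // placeholder URL
```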

Main benefits

  • Replicable tests make the measurements easy to interpret (same location, same pages tested, etc.)
  • The user scenario can represent a comprehensive set of user actions which correspond to a business transaction.

Main challenges

  • Synthetic testing requires configuring test scenarios: enough to cover everything worth monitoring, but no more than you can maintain in the long run.
  • Maintenance: if your application evolves fast, so must your tests. This is a second pitfall.
  • Synthetic is not great at providing a clear path to the root cause when users complain; it is more of a proactive alerting system that warns you when something is broken.
  • Despite all the testing options, you will find it difficult for synthetic tests to reflect the variety of users and transactions observed on your app.

Synthetic monitoring: Kadiska’s deployment as an example

Real User Monitoring

Real User Monitoring consists of collecting, from your users’ devices, performance analytics that reflect all the transactions performed on your app.

Here is a short article that describes how RUM works. In a nutshell, a script that you integrate into your HTML code instructs your users’ browsers to send the performance data available through the W3C APIs to your Real User Monitoring platform.
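
In concrete terms, such a script might look like the following minimal sketch; the collector endpoint is a placeholder, and real RUM agents collect far more dimensions:

```typescript
// Minimal RUM sketch: collect W3C performance data in the user's
// browser and beacon it to a collector. The endpoint is a placeholder.
const COLLECTOR = "https://rum.example.com/collect";

function send(payload: object): void {
  // sendBeacon survives page unload, unlike a plain fetch.
  navigator.sendBeacon(COLLECTOR, JSON.stringify(payload));
}

// Navigation Timing: one entry per page load.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const nav = entry as PerformanceNavigationTiming;
    send({
      type: "navigation",
      url: nav.name,
      ttfb: nav.responseStart - nav.requestStart,
      load: nav.loadEventEnd - nav.startTime,
    });
  }
}).observe({ type: "navigation", buffered: true });

// Largest Contentful Paint: keep the latest candidate and report
// the final value when the page is hidden.
let lcp = 0;
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const last = entries[entries.length - 1];
  if (last) lcp = last.startTime;
}).observe({ type: "largest-contentful-paint", buffered: true });

document.addEventListener("visibilitychange", () => {
  if (document.visibilityState === "hidden") send({ type: "lcp", value: lcp });
});
```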

Main benefits

  • Full understanding of your audience and user profiles (geographic location, connectivity, device, browser, …)
  • Complete visibility of how experience (response times / errors) ties into user profiles.
  • Great at breaking down user performance to isolate the origin of degradations.
  • Shows 100% of the transactions with no configuration required, so implementation and maintenance are easy.
  • Also works for complex single-page applications (SPAs).

Main challenges

  • Data volume: you get a lot of data, but you have to store it!
  • So much data, which you can pivot across many dimensions (time, location, connectivity, device, browser, transaction, etc.), makes interpretation somewhat more complicated than reading a simple time series.

Real User Monitoring: Kadiska’s deployment model as an example

What should you monitor, and how? Synthetic vs Real User Monitoring

 

|  | Synthetic | Real User Monitoring |
|---|---|---|
| How | A test platform replays scenarios and reports on execution times and any errors. | A script instructs users’ browsers to share performance analytics for all transactions made. |
| Pros | Easy to interpret. Great for proactive alerting on a simple set of pages / transactions. | Instant implementation. Complete visibility of the whole audience. Great for troubleshooting and looking for optimizations. Great where scenarios are hard to develop or have insufficient coverage of user activities (SPAs). |
| Cons | Limited visibility on real usage and experience. Scenario maintenance. | Harder to interpret. More data volume to store. |
| Adequate use cases | Relatively simple websites. Proactive alerting on a simple set of pages to test. | SPAs. Complex implementations. Heterogeneous user profiles. |

Kadiska’s strategy: combining the best of both worlds!

Kadiska’s vision is to combine Real User Monitoring and Synthetic monitoring:

  • RUM, to get full visibility of an application’s audience, understand the drivers of the digital experience, see for which transactions and user profiles it is satisfactory or not, and isolate why it is not good enough (which part of the platform: app, CDN, 3rd party; and what is driving the poor response: network, server processing, data transfer, queueing, etc.)
  • Intelligent synthetic testing, to test your app and the network and cloud infrastructure, get to the root cause, and know how it can be fixed.

Take a look at Kadiska! 
