Performance Monitoring: RUM vs. synthetic monitoring

Synthetic monitoring and real user monitoring (RUM) are two approaches to monitoring and providing insight into web performance. RUM and synthetic monitoring provide different views of performance, and each has its own benefits, use cases, and shortfalls. RUM is generally best suited to understanding long-term trends, whereas synthetic monitoring is well suited to regression testing and mitigating shorter-term performance issues during development. In this article we define and compare these two performance monitoring approaches.

Synthetic Monitoring

Synthetic monitoring involves monitoring the performance of a page in a 'laboratory' environment, typically with automation tooling in as consistent an environment as possible. It works by deploying scripts that simulate the path an end user might take through a web application and reporting back the performance the simulator experiences. The traffic measured is not from your actual users; rather, it is synthetically generated traffic that collects data on page performance.

An example of synthetic monitoring is WebPageTest.org. Tests run in a controlled environment where variables like geography, network, device, browser, and cache state are predetermined. The results include waterfall charts for every asset served by the host and CDN, as well as every third-party asset and the asset requests generated by third-party scripts, such as ads and analytics services.

Controlling for environmental variables is helpful for understanding where performance bottlenecks occur and for identifying the source of performance issues. However, synthetic monitoring isn't reflective of the actual experience of your users, especially the long tail.

Synthetic monitoring can be an important component of regression testing and production site monitoring. Test the site at every stage of development and regularly in production. A change from baseline performance detected as part of continuous integration should fail the push. If an issue arises in production, synthetic monitoring can provide insight, helping you identify, isolate, and resolve problems before they negatively impact the user experience.
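For example, a CI step can compare the metrics reported by a synthetic run against agreed budgets and fail the build on regression. The sketch below is illustrative only: the synthetic-results.json file, its metric names, and the budget values are assumptions, not the output format of any particular tool.

```ts
// check-performance-budget.ts
// Minimal sketch of a CI gate: compare synthetic test metrics against budgets.
import { readFileSync } from "node:fs";

interface SyntheticMetrics {
  firstContentfulPaintMs: number;
  timeToInteractiveMs: number;
  totalTransferBytes: number;
}

// Hypothetical budgets agreed on as the baseline for this project.
const budgets: SyntheticMetrics = {
  firstContentfulPaintMs: 1800,
  timeToInteractiveMs: 3500,
  totalTransferBytes: 500_000,
};

// Assumed results file produced by your synthetic monitoring run.
const results: SyntheticMetrics = JSON.parse(
  readFileSync("synthetic-results.json", "utf8"),
);

const failures = (Object.keys(budgets) as (keyof SyntheticMetrics)[]).filter(
  (metric) => results[metric] > budgets[metric],
);

if (failures.length > 0) {
  for (const metric of failures) {
    console.error(
      `${metric}: ${results[metric]} exceeds budget ${budgets[metric]}`,
    );
  }
  process.exit(1); // non-zero exit fails the CI job, blocking the push
}
console.log("All synthetic metrics are within budget.");
```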

Real User Monitoring

Real User Monitoring, or RUM, measures the performance of a page from real users' machines. Generally, a third-party provider injects a script on each page to measure and report back on page load data for every request made. This technique monitors an application's actual user interactions: the browsers of real users report back the performance metrics they experience. RUM helps identify how an application is being used, including the geographic distribution of users and the impact of that distribution on the end user experience.
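A minimal sketch of what such an injected script might do is shown below, using the Navigation Timing API and navigator.sendBeacon. The /rum-collect endpoint and the payload shape are illustrative assumptions, not the API of any particular RUM product.

```ts
// rum-beacon.ts — a minimal sketch of the kind of script a RUM provider injects.
window.addEventListener("load", () => {
  // Defer briefly so the navigation entry's load timings are finalized.
  setTimeout(() => {
    const [nav] = performance.getEntriesByType(
      "navigation",
    ) as PerformanceNavigationTiming[];
    if (!nav) return;

    const payload = {
      page: location.pathname,
      // Key milestones of this real user's page load, in milliseconds.
      ttfb: nav.responseStart - nav.requestStart,
      domContentLoaded: nav.domContentLoadedEventEnd,
      loadEvent: nav.loadEventEnd,
      transferSize: nav.transferSize,
      userAgent: navigator.userAgent,
    };

    // sendBeacon delivers the data without blocking navigation or unload.
    // "/rum-collect" is a placeholder endpoint for the collection service.
    navigator.sendBeacon("/rum-collect", JSON.stringify(payload));
  }, 0);
});
```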

Unlike synthetic monitoring, RUM captures the performance of actual users regardless of device, browser, network, or geographic location. As users interact with an application, all performance timings are captured, regardless of what actions are taken or pages viewed. RUM monitors actual use cases, not the synthetic, assumed use cases predefined by an engineer, PM, or marketing team. This is particularly important for large sites or complex applications, where functionality or content is constantly changing and where the population accessing the application may differ greatly in life experience from those creating it.

By leveraging RUM, a business can better understand its users and identify the areas of its site that require the most attention. Moreover, RUM can help you understand the geographic and channel distribution trends of your users. Knowing your user trends helps you better define your business plan and, from a monitoring perspective, allows you to identify key areas to target for optimization and performance improvements.

RUM vs. synthetic monitoring

Synthetic monitoring is well suited for catching regressions during development life cycles, especially with network throttling. It is fairly easy to set up, inexpensive, and great for spot-checking performance during development as an effective way to measure the effect of code changes, but it doesn't reflect what real users are experiencing and provides only a narrow view of performance.

RUM, on the other hand, provides real metrics from real users using the site or application. While this is more expensive and likely less convenient, it provides vital user experience data.

Performance APIs

There are many monitoring services. If you do want to roll your own monitoring system, take a look at the performance APIs, mainly PerformanceNavigationTiming and PerformanceResourceTiming, but also PerformanceMark, PerformanceMeasure, and PerformancePaintTiming.
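As a rough sketch of how these APIs fit together, a page could observe paint and measure entries as they are recorded, read navigation and resource timings directly, and wrap its own work in marks and measures. The "search" names and the doSearch placeholder below are illustrative assumptions, not part of any API.

```ts
// perf-apis.ts — a sketch of rolling your own measurements with the performance APIs.

// Report paint entries (PerformancePaintTiming) and custom measures
// (PerformanceMeasure) as the browser records them.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(entry.entryType, entry.name, entry.startTime, entry.duration);
  }
});
observer.observe({ entryTypes: ["paint", "measure"] });

// Navigation and resource timings can be read directly once the page has loaded.
window.addEventListener("load", () => {
  const [nav] = performance.getEntriesByType(
    "navigation",
  ) as PerformanceNavigationTiming[];
  console.log("DOM interactive:", nav?.domInteractive, "ms");

  const resources = performance.getEntriesByType(
    "resource",
  ) as PerformanceResourceTiming[];
  console.log("Resources loaded:", resources.length);
});

// Custom timings: PerformanceMark entries around application work produce a
// PerformanceMeasure entry that the observer above will report.
performance.mark("search-start");
doSearch(); // placeholder for the work you want to time
performance.mark("search-end");
performance.measure("search", "search-start", "search-end");

function doSearch(): void {
  // Stand-in for application code; replace with the real task.
  for (let i = 0; i < 1_000_000; i++) {}
}
```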