Programmatically Interact with RUM data
Real User Measurement (RUM) can significantly help you manage the quality of your releases by providing near-immediate feedback on site health. One of the most powerful ways to leverage this feedback is through API calls. Most RUM platforms offer API access for a variety of tasks and queries. When evaluating RUM, look for a platform that supports API calls to create and update data collection configurations, to add annotations to the RUM data, and to query the RUM data on demand.
You can leverage APIs to automate tasks during builds and releases. The most valuable use of APIs is to annotate RUM data when you push out changes, so that you can later trace a regression back to the specific change or release that might be responsible. Another valuable use, though one that is more complex to deploy, is to validate with RUM data that your site performance still meets target levels after the release.
Breadcrumbs for Diagnostics
Annotations are like breadcrumbs for diagnostics. Not every issue introduced during a release will be immediately obvious. Some performance issues may take a while to identify, either because they affect a smaller subset of the visitor population, because the degradation appears in only one key performance timer, or because the degradation is subtle and can only be identified with a large sample size. Whatever the problem, if the regression can be traced back to a point in time, and that point in time matches an annotation in your RUM data that says "we pushed out a release", you will likely save a lot of time and energy tracking down the source of the issue.
Annotations should not be limited to externally visible site changes, of course. Use API calls to annotate RUM data for all meaningful changes in the delivery architecture, including updates or changes to servers, load balancers, web application firewalls, and other components, even if the end user would not see a change in page design or content. You might also consider annotations for major business-cycle events, such as the beginning and end of major sales or promotional periods.
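As a concrete illustration, a deploy step can post an annotation in a few lines of code. The endpoint URL, field names, and auth header below are hypothetical placeholders, not a real platform's contract; consult your RUM vendor's annotation API documentation for the actual schema.

```python
"""Sketch: annotate RUM data at deploy time via a hypothetical REST API."""
import json
import time
import urllib.request

def build_annotation(title, text, start_ms=None):
    """Assemble an annotation payload; timestamps are epoch milliseconds."""
    return {
        "title": title,
        "text": text,
        "start": start_ms if start_ms is not None else int(time.time() * 1000),
    }

def post_annotation(api_url, token, payload):
    """POST the annotation; the URL and auth header are assumptions."""
    req = urllib.request.Request(
        api_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "X-Auth-Token": token},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Annotate releases and infrastructure changes alike:
note = build_annotation(
    "Release 2024.07.3",
    "Deployed web tier build 2024.07.3; also rotated load balancer config",
)
# post_annotation("https://rum.example.com/api/annotations", "TOKEN", note)
```

Calling this from the same pipeline stage that performs the deployment guarantees the annotation timestamp lines up with the change it describes.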
Integrating mPulse into Release Pipelines
The next step beyond simply annotating the RUM data with information about changes is to leverage the RUM data itself to make informed decisions about the health of the build process. With API calls, most RUM platforms will let you pull summary data about the health of your web site. This gives you the opportunity to spot-check shortly after a deployment whether the performance of the site has regressed, or at the very least to archive a set of performance benchmarks with each release for later manual analysis.
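A minimal sketch of that spot-check and archive step follows. The query endpoint, its parameters, and the response shape are assumptions for illustration; substitute your RUM platform's actual summary or query API.

```python
"""Sketch: pull a post-deploy performance snapshot and archive it per release."""
import json
import urllib.request
from datetime import datetime, timezone

def fetch_summary(api_url, token, window_minutes=30):
    """GET summary metrics for the last window_minutes (hypothetical API)."""
    req = urllib.request.Request(
        f"{api_url}?window={window_minutes}m",
        headers={"X-Auth-Token": token},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def archive_benchmark(release_id, summary, path="rum-benchmarks.jsonl"):
    """Append one JSON line per release so benchmarks are easy to diff later."""
    record = {
        "release": release_id,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "metrics": summary,
    }
    with open(path, "a") as fh:
        fh.write(json.dumps(record) + "\n")
    return record

# In the pipeline, shortly after the deployment settles:
# summary = fetch_summary("https://rum.example.com/api/summary", "TOKEN")
# archive_benchmark("2024.07.3", summary)
```

Appending to a line-delimited JSON file keeps each release's benchmark alongside the others, so a later investigation can diff releases without re-querying the platform.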
The most advanced users of RUM in a release pipeline might even fail and roll back a build if the performance numbers are not up to specification. Even if you are not ready for that level of integration, archiving a performance benchmark, and perhaps automatically reporting the performance numbers to critical team members as a checkpoint in the release, can help build a culture of performance within your organization. Knowing that RUM will keep everyone honest when it comes to performance will ensure that performance is valued within your teams.
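The fail-the-build step can be expressed as a simple budget check. The metric names and threshold values below are illustrative assumptions; in practice the summary would come from your RUM platform's query API and the budgets from your own service-level targets.

```python
"""Sketch: a post-deploy performance gate for a release pipeline."""

def check_budget(summary, budgets):
    """Return the list of metrics that exceed their budget (missing = fail)."""
    return [m for m, limit in budgets.items()
            if summary.get(m, float("inf")) > limit]

# Example values; in a real pipeline these come from the RUM API and config.
summary = {"median_load_ms": 2900, "p95_load_ms": 8200}
budgets = {"median_load_ms": 3000, "p95_load_ms": 7500}

violations = check_budget(summary, budgets)
if violations:
    print("Performance budget exceeded:", ", ".join(violations))
    # sys.exit(1) here would fail the stage and trigger a rollback
```

Treating a missing metric as a failure is a deliberate choice: if the RUM query returned no data, the gate should not silently pass.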