How Should App Dev Teams Monitor End User Experience?
Updated · Jan 19, 2016
By Arun Balachandran, ManageEngine
Most businesses judge the performance of their Web applications based on the way these applications behave for their end users. In the case of business-critical applications, organizations need to ensure these applications perform well at all times. Monitoring the end-user experience of important applications is, therefore, pivotal from a business standpoint.
End-user experience monitoring, as most people define it, tracks how end users perceive an application’s performance. Although the idea sounds simple, it is difficult to measure in practice: Web applications keep growing in complexity, users keep growing more demanding, and the proliferation of smart devices such as tablets and smartphones multiplies the ways users access these applications.
Fortunately, there are a few methods available through which businesses can determine the user experience of their Web applications. Let’s take a look at three common approaches.
Synthetic Transaction Monitoring
Synthetic transaction monitoring is an active application monitoring technique based on the concept of simulating the actions of an end user on a Web application. This method involves the use of external monitoring agents executing pre-recorded scripts that mimic end-user behavior at regular time intervals. The monitoring agents are usually lightweight and add negligible load to network traffic.
Most application performance monitoring solutions provide recorder tools to capture the actions or paths a typical end user might take in an application, such as log in, view product, search and check out. These recordings are saved as scripts, which are then executed by the monitoring agents from different geographical locations.
Since synthetic transaction monitoring involves sending requests across the network, it can measure the response time of application servers and network infrastructure. This type of monitoring does not require actual Web traffic, so you can use this approach to test your Web applications prior to launch – or any time you like. Many companies use synthetic monitoring before entering production in the form of automated integration tests with Selenium.
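The core loop of such an agent can be sketched in plain Python. This is a hypothetical, stdlib-only stand-in for what a recorder-generated script or Selenium test would do: step through a scripted, read-only transaction, time each request, and flag errors or slow responses. The step paths, the `run_transaction` helper and the injectable `fetch` callable are all illustrative names, not any vendor's API.

```python
import time
from typing import Callable, Dict, List

# Hypothetical recorded transaction: ordered, read-only steps a user would take.
SCRIPT = ["/login", "/products/42", "/search?q=shoes"]

def run_transaction(fetch: Callable[[str], int], steps: List[str],
                    threshold_ms: float = 2000.0) -> Dict:
    """Execute each scripted step and time it, as a synthetic agent would.

    `fetch` stands in for the HTTP client (e.g. urllib or a Selenium
    driver); here it simply returns an HTTP status code.
    """
    results = []
    for path in steps:
        start = time.perf_counter()
        status = fetch(path)
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        results.append({
            "step": path,
            "status": status,
            "elapsed_ms": elapsed_ms,
            "slow": elapsed_ms > threshold_ms,   # performance degradation
            "error": status >= 400,              # functional failure
        })
    # The agent would alert if any step was slow or failed.
    return {"steps": results,
            "ok": not any(r["slow"] or r["error"] for r in results)}
```

Because `fetch` is injected, the same script can be pointed at staging before launch or run from agents in different geographic locations, matching the use cases described above.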
Synthetic monitoring does have its limitations, though. Since the monitoring is based on pre-defined transactions, it does not monitor the perception of real end users. Transactions have to be “read-only” because they would otherwise set off real purchase processes. This limits the usage to a certain subset of your business-critical transactions.
The best approach is to use synthetic transaction monitoring as a reference measurement that will help identify performance degradation, detect network problems and notify in case of errors.
Real User Monitoring (RUM)
Real user monitoring (RUM) is a passive monitoring technique that records how actual users interact with a live application, typically by collecting timing data from each user's browser. The data gathered through RUM provides answers to questions about user experience such as:
- How long did it take to load the full page?
- What is the response time from a network perspective (redirection time, DNS resolution time, connection time)?
- What is the time interval between sending the request and receiving the first byte of response?
- What is the time taken by the browser to receive the response and render the page?
- Are there any problems on the page? If yes, what caused the problem?
- How is the performance when the application is accessed from different countries?
- What is the response time across different browsers?
- Do new application updates affect the performance in a specific version of the browser?
- How does the application perform on different platforms such as desktop, Web and mobile?
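Several of the questions above map directly onto the timing marks a browser exposes through the W3C Navigation Timing API. As a sketch, the derivation of those metrics from raw marks might look like this (the field names follow the Navigation Timing spec; the sample values are purely illustrative):

```python
from typing import Dict

def derive_rum_metrics(t: Dict[str, float]) -> Dict[str, float]:
    """Turn raw navigation-timing marks (milliseconds) into RUM metrics."""
    return {
        "redirect_ms":  t["redirectEnd"] - t["redirectStart"],        # redirection time
        "dns_ms":       t["domainLookupEnd"] - t["domainLookupStart"],  # DNS resolution
        "connect_ms":   t["connectEnd"] - t["connectStart"],           # TCP connection
        "ttfb_ms":      t["responseStart"] - t["requestStart"],        # time to first byte
        "render_ms":    t["loadEventEnd"] - t["responseEnd"],          # receive + render
        "full_page_ms": t["loadEventEnd"] - t["navigationStart"],      # full page load
    }

# Illustrative marks for one page view, relative to navigationStart.
sample = {
    "navigationStart": 0, "redirectStart": 0, "redirectEnd": 0,
    "domainLookupStart": 5, "domainLookupEnd": 30,
    "connectStart": 30, "connectEnd": 70,
    "requestStart": 70, "responseStart": 250, "responseEnd": 400,
    "loadEventEnd": 1200,
}
```

A RUM agent would compute these per page view in the browser and ship them to a collector, where they can be sliced by country, browser or application version to answer the remaining questions.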
The biggest advantage of monitoring real user data is that it relies on actual traffic to take measurements. There is no need to script the important use cases, which can save a lot of time and resources.
Real user monitoring captures everything as a user goes through the application, so performance data will be available irrespective of what pages the user sees. This is particularly useful for complex apps in which the functionality or content is dynamic.
Server-Side Monitoring
Although user experience is best tracked at the browser level, application performance monitoring on the server side also provides insight into end-user performance. Server-side monitoring is mostly used in conjunction with real user monitoring, because problems originating on the server side can only be efficiently detected there.
Monitoring performance on the server side involves agent-based instrumentation technology for acquiring and transmitting data. This monitoring approach is used to watch user transactions in real time and troubleshoot in case of issues such as slowness or application bugs.
Developers have to install agents on the application server to help capture and visualize transactions end-to-end, with performance statistics across all components, from the URL down to the SQL level. This visual breakdown reveals the flow of all the user transactions being executed in each layer of the application infrastructure.
Server-side monitoring tracks the response time and throughput of each application component, with the option to trace transactions end-to-end via code analysis. This helps IT operations/devops teams identify slow Web transactions and then isolate performance issues down to the specific application code that caused them.
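The agent-based instrumentation described above can be sketched as a decorator that records a timed span for each instrumented function, tagged by layer, so a single transaction breaks down from the URL handler to the SQL call. All names here (`instrument`, `TRACE`, the example handlers) are illustrative, not a real agent's API:

```python
import functools
import time
from typing import List

# In a real agent this would be shipped to a collector; here it's a list.
TRACE: List[dict] = []

def instrument(layer: str):
    """Record how long each call to the decorated function takes."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                TRACE.append({
                    "layer": layer,
                    "name": fn.__name__,
                    "elapsed_ms": (time.perf_counter() - start) * 1000.0,
                })
        return wrapper
    return decorator

@instrument("sql")
def fetch_order(order_id):
    time.sleep(0.01)  # stands in for a real database call
    return {"id": order_id}

@instrument("url")
def get_order_page(order_id):
    return fetch_order(order_id)  # URL layer calls down into the SQL layer
```

Because inner spans finish first, the trace naturally reconstructs the call tree: the "url" span's time includes the "sql" span's time, which is exactly the URL-down-to-SQL breakdown the visualisation shows.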
The underlying database is typically monitored as well, to identify slow database calls, database usage and overall database performance. With server-side monitoring, users can see which SQL queries were executed during a transaction and thus identify the worst-performing queries.
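Ranking the worst-performing queries from recorded call data is a simple aggregation. A minimal sketch, assuming the agent has logged `(sql_text, elapsed_ms)` pairs (the function and field names are illustrative):

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def worst_queries(calls: List[Tuple[str, float]], top_n: int = 3) -> List[Dict]:
    """Aggregate per-query timings and rank queries by total time spent."""
    totals = defaultdict(lambda: {"count": 0, "total_ms": 0.0})
    for sql, ms in calls:
        totals[sql]["count"] += 1
        totals[sql]["total_ms"] += ms
    ranked = sorted(totals.items(),
                    key=lambda kv: kv[1]["total_ms"], reverse=True)
    return [
        {"sql": sql, "count": s["count"], "total_ms": s["total_ms"],
         "avg_ms": s["total_ms"] / s["count"]}
        for sql, s in ranked[:top_n]
    ]
```

Ranking by total rather than average time surfaces both the one slow report query and the fast query that runs thousands of times per minute, either of which can dominate database load.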
Every business is different, and its own requirements should guide which type of application monitoring to implement. An ideal approach is to combine active and passive monitoring techniques so that no stone is left unturned in monitoring end-user experience.
Arun Balachandran is a senior marketing analyst at ManageEngine, the real-time IT management company, and currently works for ManageEngine’s application performance management solution. He has a master’s degree in computer applications. You can follow the company blog at http://blogs.manageengine.com or follow ManageEngine on Facebook http://www.facebook.com/ManageEngine and on Twitter @ManageEngine.