On 29 March 2017, Browsium hosted a webinar: “Reaching peak web application performance with Browsium”. We had a large group participating in the live webinar, which generated a number of great questions about the unique solution Browsium provides for optimizing productivity and the end user experience for your LOB web applications. We have compiled the complete set of questions, with answers, to share with all attendees and with those seeing the webinar for the first time here in this post. If you missed the live event, you can watch the video archive on YouTube today (or use the embedded video player above).
Read on to see the questions (and our responses) from the webinar.
You talked about how other management tools don’t offer insights into browser performance, but our existing tools highlight browser process data. What did you mean by that?
A: We get that question a lot. What we’re talking about with Proton is the ability to ‘go inside’ the browser process and give a much richer, more granular view of activity and usage. Existing tools will tell you about the EXE, which is great information for non-browser activity. But the browser simultaneously loads so many different pages, content types, and files, each behaving in its own way, that viewing the browser as a single EXE blends too much information together to be usable. Our goal is to help IT understand the operational specifics of the browser, enabling you to get to the bottom of performance issues in an organized and efficient manner. We’re highlighting and providing deep-dive access into the granular details, all in one place, so you can find, isolate, and then take action to resolve the problem.
Some of the performance data you showed is interesting, but our server teams can generate reports on usage and volumes. How is this different from what they already report?
A: Great question. The biggest difference in the reporting is viewpoint. Knowing how many connections are being made to the server is helpful – to the server team. It provides no information on client performance, and no information on what the actual web application experience is for the end user(s), where it really matters. It means nothing if server capacity is well under threshold but the end user(s) are experiencing massive load waits because of how that web application is written – maybe a script error or a Java load. Or worse, that one slow web application is impacting other web apps in other tabs for the end user(s). Looking only at server usage and performance stats is great for the team responsible for ensuring the health of those boxes, but it ignores the larger pieces of the browser management picture, which is where the users live.
How does your trending performance chart really help us versus monitoring alerts that tell us when a system is down?
A: I’d say the main issue is about being reactive and dealing with ‘false positives’. It’s also a matter of looking for ‘slowness’ versus ‘outage’. The goal with Proton is not to be a system outage monitoring tool. That said, from our view, an alert that a system is down can have many root causes, including a simple ‘hiccup’ in the network between the monitor and the endpoint. It could be nothing. Or it could be a real outage – but where is the outage? Somewhere between those two points…maybe it’s in the network, maybe it’s the server, etc. When we look at performance monitoring in Proton, we’re interested in understanding trends and using that information to highlight real issues. A simple ‘hiccup’ here or there will smooth itself out in the trended data – we’ll still see it in the details, but as we showed in the demo, the trends are what we’re highlighting and what’s important. In terms of ownership and resolution, the value of the trended charts is that we know the issue is not only on the server, but really in the web application tier itself. The information Proton provides will enable a faster approach to issue identification and resolution. That’s our goal.
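To illustrate the point about trends versus one-off alerts, here is a minimal sketch (not Browsium code – the sample data and function names are our own, purely hypothetical) showing why a single latency ‘hiccup’ fades in trended data while a sustained slowdown does not:

```python
# Illustrative sketch: a transient latency spike barely moves a trailing
# moving average, while a sustained slowdown dominates it.

def moving_average(samples, window):
    """Trailing moving average over a list of latency samples (ms)."""
    return [
        sum(samples[max(0, i - window + 1): i + 1])
        / len(samples[max(0, i - window + 1): i + 1])
        for i in range(len(samples))
    ]

# Hypothetical page-load latencies (ms): steady baseline, one transient spike.
hiccup = [200] * 10 + [900] + [200] * 10
# Same baseline, but the slowdown persists -- a real issue worth chasing.
sustained = [200] * 10 + [900] * 11

trend_hiccup = moving_average(hiccup, window=5)
trend_sustained = moving_average(sustained, window=5)

# The one-off spike smooths back toward baseline; the real problem does not.
print(round(trend_hiccup[-1]))     # back near 200
print(round(trend_sustained[-1]))  # stuck near 900
```

The raw samples still record the spike (the ‘details’ mentioned above), but the trend line is what separates noise from an issue that needs ownership.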