A guide to APM for the network manager

Has application performance management become a much-needed discipline for the network manager? We examine the issues for IT departments

Applications have become more dynamic, multi-platform and compartmentalised, and altogether more complex. In this new world of the ultra-distributed composite application, the task of centralised Application Performance Management (APM) is increasingly crucial but also more difficult.

An inconvenient truth

The fact that applications are no longer static (or, at least, not as static as they were two decades ago) stems from several factors, including the way apps are deployed and consumed.

Cloud computing and social media have given rise to a breed of applications that constantly connect, update and in some cases (such as lightweight ‘disposable’ time-sensitive apps for events) renew themselves. On the back of this, software development evangelists have championed terms such as agile and continuous delivery to explain how making changes “little and often” could be the route to greater incremental efficiencies for users.

This is all theoretically good news for the ever-impatient user who wants something new and extra all the time.

For the network manager and the system administrator looking after the backend, though, this is like having someone rearrange the jigsaw pieces every morning, halfway through the puzzle.

Given that most IT management systems require software agents to be installed on the devices being managed, dynamic applications mean those agents must also be constantly updated. What’s to stop resource overhead costs of this kind escalating, and the always-on APM world becoming a bottomless pit of expenditure?

Apps have changed, APM has changed

But APM development itself hasn’t stood still. If anything, it has grown directly in line with the advent of networked cloud services, to the point where tools now exist to provide controls in this treacherously shifting application and data landscape. So how did APM get networked and better connected?

The answer (or at least part of it) lies in the fact that APM has become extremely intelligent about what is instrumented at the network level, how that instrumentation is measured and when the data itself is collected. This is the view of William Louth, founder of Dutch performance monitoring company Jinspired.

“Adaptive APM, a more modern approach, goes further by measuring its own overhead costs and relating them to the value of the data collected. Using dynamic budgeting mechanisms and value analysis techniques, the new APM is now far more manageable in this regard,” he says.

“Because this measurement intelligence has now moved to the runtime (rather than residing at the point of installation and configuration), new APM solutions are better suited to continuous deployment.”
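
Louth’s idea of dynamic budgeting can be pictured with a short sketch. The Python below is purely illustrative – the class name AdaptiveAgent, the overhead_budget parameter and the collect() placeholder are invented for this article rather than taken from any vendor’s API – but it shows the principle of an agent that measures its own cost and stops gathering detail once that cost exceeds a set fraction of application time.

    import time

    class AdaptiveAgent:
        """Toy instrumentation agent that throttles its own data collection
        when measurement overhead exceeds a configurable budget."""

        def __init__(self, overhead_budget=0.01):
            self.overhead_budget = overhead_budget  # max fraction of time spent measuring
            self.app_time = 0.0                     # time spent in application code
            self.agent_time = 0.0                   # time spent collecting metrics

        def over_budget(self):
            total = self.app_time + self.agent_time
            return total > 0 and (self.agent_time / total) > self.overhead_budget

        def instrument(self, func):
            def wrapper(*args, **kwargs):
                start = time.perf_counter()
                result = func(*args, **kwargs)
                elapsed = time.perf_counter() - start
                self.app_time += elapsed

                if not self.over_budget():
                    # Only pay for detailed collection while within budget
                    t0 = time.perf_counter()
                    self.collect(func.__name__, elapsed)
                    self.agent_time += time.perf_counter() - t0
                return result
            return wrapper

        def collect(self, name, elapsed):
            # Placeholder: a real agent would record call trees, tags, percentiles, etc.
            print(f"{name} took {elapsed * 1000:.2f} ms")

In practice the budget, and the choice of which measurements to drop first, would themselves be tuned at runtime, which is what makes the approach suited to continuous deployment.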

From his position working with so-called “adaptive control” technologies in this space, the Irish-born, self-styled technology evangelist says APM has evolved to add value and offset costs by managing the application rather than simply monitoring it. It does this by applying techniques found in networking, such as quality of service (QoS) and adaptive control, within the application itself.

Louth explains that by managing applications using runtime agents embedded within the application, APM can not only optimise the application to reflect environment and workload conditions, but can also increase resilience by absorbing surges and sensing changes in other connected external systems.
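
A minimal sketch of what such QoS inside the application could look like is shown below. The AdaptiveThrottle class, its thresholds and its method names are assumptions made for illustration, not a description of Jinspired’s product: an embedded agent caps concurrent work and sheds non-critical requests when the latency it has been measuring on a downstream dependency drifts past a target.

    import threading

    class AdaptiveThrottle:
        """Minimal in-process QoS control: cap concurrent work and shed load
        when a monitored downstream dependency slows beyond a latency target."""

        def __init__(self, max_concurrent=50, latency_target=0.5):
            self.slots = threading.Semaphore(max_concurrent)
            self.latency_target = latency_target  # seconds
            self.recent_latency = 0.0             # smoothed latency fed by the agent

        def report_latency(self, seconds, alpha=0.2):
            # Exponentially weighted moving average of observed call latency
            self.recent_latency = alpha * seconds + (1 - alpha) * self.recent_latency

        def admit(self):
            # Refuse non-critical work when the dependency is visibly struggling,
            # or when all concurrency slots are already taken
            if self.recent_latency > self.latency_target:
                return False
            return self.slots.acquire(blocking=False)

        def release(self):
            self.slots.release()

Fed with the same timing data the agent already collects, a control like this is what lets the application absorb surges rather than simply report them.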

So is networked APM better APM?

Would it be fair, then, to say networked APM is better APM in 2014? Perhaps it would, if we agree there is no such thing as a “single application” any more, but rather a set of jigsaw pieces continuously re-used to create new services that cater to the needs of the business.

“Consumers see this reality on a daily basis: from internet banking to managing their mobile phone, or simply buying something online,” says Martin Ashall, senior director of presales for UK & Ireland at CA Technologies. “All those services consolidate data from multiple applications or interact with different applications to complete the transaction for the user.”

“Today it is possible to have 10 or more networked applications, running in different environments, involved in a single business transaction.”

Ashall suggests APM has evolved over the years to cope with these changes. His firm’s approach monitors all running application instances and then tracks transactions as they move from one component to the next.

This provides a so-called “real-time view of the tube map” that joins the applications together. This way, it doesn’t matter what the applications talk to today or tomorrow – operations always has a view of what’s really happening and of the performance of the business transaction as a whole.
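
The tube-map view depends on recognising the same transaction at every hop. One common way of doing this – sketched below with a hypothetical X-Transaction-ID header and a placeholder record_hop() reporter, not CA’s actual mechanism – is to mint an identifier at the edge and propagate it on every outbound call, so the monitoring backend can stitch the hops into one end-to-end picture.

    import uuid

    def inbound_request(headers):
        # Reuse an existing transaction ID if an upstream component set one,
        # otherwise start a new trace at the edge
        return headers.get("X-Transaction-ID") or str(uuid.uuid4())

    def call_downstream(headers, txn_id, hop_name):
        # Propagate the same ID on every outbound call and report the hop
        outbound = dict(headers, **{"X-Transaction-ID": txn_id})
        record_hop(txn_id, hop_name)
        return outbound

    def record_hop(txn_id, hop_name):
        # Placeholder: a real agent would ship this to a central collector
        print(f"txn={txn_id} hop={hop_name}")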

The ‘it must be a network problem’ culture

“It sounds great on paper, but you also have to think about the next layer down, the network itself,” says CA’s Ashall. “Knowing that you have a problem with the application between two points would normally mean throwing the problem over to networking to fix, and a lot of companies still operate with a ‘it must be a network problem’ culture.”

“As a result, you need to monitor the network in the context of the transaction flowing over it to help pinpoint the real root cause of the slowdown or errors.”

So, from CA’s point of view, the world is moving to a combined view from multiple sources, intelligently displayed in both an application and a network context. This is borne out by the fact that the physical delivery and build of applications has become fundamentally different: they now have additional dependencies spread over ever-wider geographical and organisational boundaries.

“As a result, the network has the potential to have the greatest impact on application performance and ultimately availability,” says Mike Hicks, senior product manager for network performance monitoring at Compuware. “This means it often provides a very efficient place to get that high-level view of what, how and where your users are.”

Hicks is quick to bemoan the ‘why is it always the network?’ complaints. He points out that today the delivery of applications across the network has become more entwined, meaning demarcation points of responsibility are no longer clear.

“This results in the network teams having to communicate in a common, more inclusive way so that an isolated view of network components is no longer sufficient to manage and maintain the required application delivery criteria,” he says. “This means that while statistics about specific routers, queue depths, servers or utilities that support these applications are certainly helpful, in isolation they are incapable of reliably telling you about application service levels.”

The Compuware man asserts that effective network application performance management demands a level of understanding of application behaviour, better referred to as application fluency. Hicks says this understanding can then be applied to knowledge of network characteristics for provisioning network services, setting performance expectations, configuring devices and systems, troubleshooting performance problems, and qualifying changes to applications or networks from a performance perspective.

Do we need to redefine or extend APM?

The vice-president of marketing at Ipanema Technologies, Béatrice Piquer-Durand, talks about the need for something more than APM, in the form of Application Performance Guarantee (APG) solutions, which go beyond monitoring into proactively securing performance.

“APM provides IT departments the much-needed visibility about which applications are running across their networks,” she says. “APG enables them to prioritise applications according to their business criticality, and to do this on an application-by-application basis.”

“This means apps such as SAP, Oracle etc can get top priority, while YouTube and Facebook will be ranked as lower priority. Once these objectives are defined, the system adapts and adjusts dynamically according to traffic and demand.”
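
The kind of prioritisation Piquer-Durand describes can be pictured as a simple greedy allocation in which critical applications are satisfied first and recreational traffic receives whatever is left. The priority table and allocate() function below are an illustrative sketch only; the numbers and the function are invented for this article, not Ipanema’s actual policy engine.

    # Hypothetical priority classes (1 = most critical); in a real product this
    # would be policy configuration rather than code
    APP_PRIORITY = {"SAP": 1, "Oracle": 1, "YouTube": 4, "Facebook": 4}

    def allocate(demands_mbps, priorities, total_mbps):
        """Satisfy bandwidth demands in priority order so lower-priority
        (recreational) traffic only receives whatever capacity is left over."""
        allocation, remaining = {}, total_mbps
        for app in sorted(demands_mbps, key=lambda a: priorities.get(a, 99)):
            allocation[app] = min(demands_mbps[app], remaining)
            remaining -= allocation[app]
        return allocation

    # SAP and Oracle are served in full; YouTube is squeezed into the remainder
    print(allocate({"SAP": 20, "YouTube": 50, "Oracle": 30}, APP_PRIORITY, 60))
    # {'SAP': 20, 'Oracle': 30, 'YouTube': 10}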

So how does this application-by-application process work? Ipanema uses the Application Quality Score (AQS). At its most basic, AQS classifies how applications are performing across the WAN on a one-to-ten scale, allowing IT departments to monitor applications as they stream across various geographical areas of the network and to actively spot and troubleshoot problems.

AQS also allows companies to state in advance what the business-critical systems are. This way, recreational applications won’t be able to take up necessary bandwidth after a move to the cloud.

“By classifying how applications are performing across the WAN, AQS provides some basic numbers with big repercussions,” says Piquer-Durand. “These AQS numbers could help businesses streamline processes and minimise downtime.”

“In more technical terms, AQS operates as a composite indicator by combining a variety of sub-metrics (such as round trip time, server response time, transaction activity and TCP retransmits) and ‘one-way’ network metrics (such as transit delay, loss and jitter).”
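
Ipanema does not publish the exact AQS formula, but a composite indicator of this kind can be illustrated with a toy scoring function. The inputs used here are a subset of the sub-metrics Piquer-Durand lists; the weights and normalisation thresholds are assumptions made purely for the sketch.

    def quality_score(rtt_ms, server_response_ms, retransmit_rate,
                      one_way_delay_ms, loss_rate, jitter_ms):
        """Illustrative composite score on a one-to-ten scale: each sub-metric
        contributes a capped penalty that is subtracted from a perfect 10."""
        penalties = (
            min(rtt_ms / 300, 1.0) * 2.0 +                # round trip time
            min(server_response_ms / 1000, 1.0) * 2.0 +   # server response time
            min(retransmit_rate / 0.05, 1.0) * 2.0 +      # TCP retransmits
            min(one_way_delay_ms / 150, 1.0) * 1.0 +      # transit delay
            min(loss_rate / 0.02, 1.0) * 1.0 +            # packet loss
            min(jitter_ms / 30, 1.0) * 1.0                # jitter
        )
        return round(max(1.0, 10.0 - penalties), 1)

    # A lightly loaded, low-loss path scores near the top of the range;
    # a congested or lossy one sinks towards 1
    print(quality_score(80, 200, 0.005, 30, 0.001, 5))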

More respect for the network

Can APM now be unequivocally classed as a network-level discipline? The argument leans that way if you consider the breadth of tools aligning towards this space, the very “networked nature” of cloud in the first place, and the inevitable application interdependencies we are now creating through service-based computing frameworks.

The "Oh it must be a network problem" cry might still exist, but perhaps it will be uttered with more respect now?
