In the IT monitoring space, having eyes on everything in your infrastructure has become a requirement, and everyone has come to expect the single pane of glass, API integration with drill-through capability and full-stack service monitoring.
As a result, many specialist companies are promoting complete visibility of your entire infrastructure, positioning their tools as the panacea for keeping an eye on it all, all of the time.
Looking at the features and capabilities of the more prevalent vendors out there, it appears realistic enough, but can one specialist tool really manage all of this in one go?
Is it possible? Yes!
Does it ever work? Hardly…!
Suppose you have the Rolls-Royce of application monitoring tools: as soon as you start investigating last-hop network latency on a per-transaction basis to troubleshoot your customer portal’s performance issues – or something just as intricate, but relevant to your IT service – in most cases you will find that a mundane, basic network error is actually affecting normal service delivery.
Most of the marquee application monitoring tools you come across can drill into the most critical IT services at any level of detail.
Embarrassingly, though, upon implementation these tools most often end up pointing out bad housekeeping, such as misconfigured DHCP or network flows still being directed to decommissioned IP addresses.
Despite the grand visions that we have for our IT environments, the ground level is not as stable as we expect or want it to be and will always be something that requires our attention.
One way of looking at it is through the TCP/IP model of network communications. Application monitoring tools are used to inspect, troubleshoot and alert on the upper layer, as the name suggests, where transaction details can be decrypted for Deep Packet Inspection (DPI).
Below this, the Transport, Internet and Link layers are the supporting communication layers, and they essentially comprise physical and virtualised network equipment, VLANs, Quality of Service bands and their configurations – everything the business applications need to serve end users with information.
If this TCP/IP model is viewed as a tower of building blocks – which in many ways it is – it stands to reason that the foundational layers need to be in place and under control before the upper layers can be used to any effect.
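The layered dependency can be sketched in a few lines of Python. This is an illustrative toy, not a real monitoring implementation: the layer names follow the four-layer TCP/IP model, and the checks are hypothetical placeholders for whatever probes a tool would actually run.

```python
# Hypothetical sketch: evaluate monitoring checks bottom-up through the
# TCP/IP layers, stopping at the first layer that fails. A failure low in
# the tower makes every check above it meaningless.

def evaluate_stack(checks):
    """checks: list of (layer_name, check_fn) ordered bottom-up.
    Returns the first failing layer name, or None if all layers pass."""
    for layer, check in checks:
        if not check():
            return layer  # no point inspecting the layers above this one
    return None

stack = [
    ("Link",        lambda: True),   # e.g. interface up, no CRC errors
    ("Internet",    lambda: True),   # e.g. routes present, ping succeeds
    ("Transport",   lambda: False),  # e.g. TCP port answering
    ("Application", lambda: True),   # e.g. HTTP transaction completes
]

print(evaluate_stack(stack))  # -> Transport
```

With the Transport check failing, the Application check is never consulted – exactly the point of getting the foundation under control first.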
These are areas and functions that need to be maintained.
Don’t take my word for it though, refer to any operational lifecycle or governance framework. Somewhere between the planning, design and operation of any service in IT, maintenance is required.
ITIL labels it as “Transition”, COBIT says “Review Effectiveness” and the Sintrex in-house methodology chose to call it “Verify”, but it still speaks to evaluating existing structures for effectiveness and performing maintenance where necessary.
But “If it isn’t broken, don’t fix it”, so unless something goes wrong and gets rectified, how would one maintain the lower layers of this tower?
The lowest layers of the model need continuous emphasis, and your focus can only move to the upper layers once the layers currently in focus have matured into established processes of maintenance and upkeep.
This should ring true for anyone involved in networks, as the first port of call when assigning blame is invariably the network. More trust in the network and greater visibility into the lower layers translate into less time spent hunting basic errors.
And when an end user claims the ERP system is not working, IT support should first and foremost confirm that the physical network servicing the system is up and running.
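That first-line check can be as simple as confirming TCP reachability before any application-level troubleshooting begins. A minimal sketch, assuming Python's standard `socket` library; the hostnames and ports are hypothetical examples, not real infrastructure:

```python
# Minimal "check the basics first" triage sketch.
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failure, refusal and timeout
        return False

if __name__ == "__main__":
    # Before blaming the ERP application, confirm the path to it is up.
    checks = [
        ("erp.example.internal", 443),   # hypothetical ERP front end
        ("db.example.internal", 5432),   # hypothetical backing database
    ]
    for host, port in checks:
        status = "up" if port_reachable(host, port) else "DOWN"
        print(f"{host}:{port} -> {status}")
```

A script like this answers the cheapest question first; only once it reports everything up is it worth reaching for the application monitoring tool.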
If you can say with confidence that the basics are in place and the network is doing what it should, it enables you to build up from this foundation to view all the intricacies that depend on the network.
This is the level of confidence you should have in your network before you can put your trust in application monitoring.