March 2019

Application Monitoring – still haven’t found what you’re looking for?

In the IT monitoring space, having eyes on everything in your infrastructure has become a requirement, and everyone has come to expect the single pane of glass, API integration with drill-through capability, and full-stack service monitoring.

As a result, many specialist companies are touting complete visibility of your entire infrastructure, positioning their tools as the panacea for keeping an eye on all of it, all of the time.

Looking at the features and capabilities of the more prevalent vendors out there, it appears realistic enough – but can one specialist tool really manage all of this at once?

Is it possible? Yes!

Does it ever work? Hardly…!

Here’s why…

Suppose you have the Rolls-Royce of application monitoring tools. As soon as you start investigating last-hop network latency on a per-transaction basis to troubleshoot your customer portal’s performance issues – or something just as intricate, but relevant to your IT service – in most cases you will find that a mundane, basic network error is what is actually affecting normal service delivery.

Most of the marquee application monitoring tools you come across can drill into any level of detail of your most critical IT services.

Embarrassingly often, upon implementation these tools end up pointing out bad housekeeping, such as misconfigured DHCP or network flows being directed to decommissioned IP addresses.

Despite the grand visions that we have for our IT environments, the ground level is not as stable as we expect or want it to be and will always be something that requires our attention.

One way of looking at it is through the TCP/IP model of network communications. Application monitoring tools are used to inspect, troubleshoot and alert on the upper layer – as the name suggests – where transaction details can be decrypted for Deep Packet Inspection (DPI).

Below this, the Transport, Internet and Link (Network Access) layers are the supporting communication layers, and essentially constitute the physical and virtualized network equipment, VLANs, Quality of Service bands and their configurations – everything the business applications need to serve the end users with information.

If this TCP/IP model is viewed as a tower of building blocks – which in many ways it is – it stands to reason that the foundational layers need to be in place and under control before the upper layers can be used to any effect.

These foundational layers are areas and functions that need to be maintained continuously.

Don’t take my word for it though – refer to any operational lifecycle or governance framework. Somewhere between the planning, design and operation of any IT service, maintenance is required.
ITIL labels it “Service Transition”, COBIT says “Review Effectiveness” and the Sintrex in-house methodology chose to call it “Verify”, but each speaks to evaluating existing structures for effectiveness and performing maintenance where necessary.

But “if it isn’t broken, don’t fix it” – so unless something goes wrong and gets rectified, how would one maintain the lower layers of this tower?

Emphasis should rest on the lowest levels of the model continuously, and your focus should only move to the upper layers once the layers currently in focus have matured into established processes of maintenance and upkeep.

This should ring true for anyone involved in networks, as the first port of call when assigning blame is invariably the network. More trust in the network and better visibility into the lower layers translate into less time spent hunting basic errors.

And when an end user claims the ERP system is not working, IT support should first and foremost confirm that the physical network servicing the system is up and running.

If you can say with confidence that the basics are in place and the network is doing what it should, you can build up from that foundation to view all the intricacies that depend on the network.
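
To make this concrete, here is a minimal sketch in Python of such a bottom-up check – the host name “erp.example.com” and port 443 are hypothetical placeholders, not from this article. Each step verifies a lower layer of the TCP/IP tower before the application itself is questioned.

```python
# Minimal bottom-up health check - a sketch, assuming a hypothetical
# ERP host "erp.example.com" serving HTTPS on port 443.
# Each step verifies a lower layer before moving up the tower.
import socket
import subprocess
import urllib.request

HOST = "erp.example.com"  # hypothetical host name
PORT = 443                # hypothetical service port

def check_dns(host):
    """Name resolution: basic housekeeping at the foundation."""
    return socket.gethostbyname(host)

def check_icmp(ip):
    """One ping: is the Internet layer reachable at all?
    (Uses the Unix flag -c; on Windows this would be -n.)"""
    return subprocess.run(["ping", "-c", "1", ip],
                          capture_output=True).returncode == 0

def check_tcp(host, port):
    """Can a Transport-layer connection be opened?"""
    try:
        with socket.create_connection((host, port), timeout=3):
            return True
    except OSError:
        return False

def check_http(host):
    """Only now do we ask the Application layer how it feels."""
    with urllib.request.urlopen(f"https://{host}/", timeout=5) as resp:
        return resp.status

if __name__ == "__main__":
    ip = check_dns(HOST)
    print("DNS :", ip)
    print("ICMP:", "up" if check_icmp(ip) else "down - stop here")
    print("TCP :", "open" if check_tcp(HOST, PORT) else "closed - stop here")
    print("HTTP:", check_http(HOST))
```

If any of the lower checks fails, there is no point interrogating the application layer – which is exactly the order of operations argued for above.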

This is the level of confidence you should have in your network before you can put your trust in application monitoring.

Mapping the future with information

The famous ocean explorers of old navigated by maps. As time moved on and more voyages were undertaken, navigators steadily improved those maps as new contributions and corrections were made. From the earliest maps, where the edge of the world was still a real concern, the quality of the information gradually improved until the maps became reliable enough for modern-day use. (Until satellites took all of the guesswork out of it, that is!)

Where the famous explorers of the ocean had maps, we have data.

So, if you consider yourself an explorer of information, data would be your map – very likely your company’s own data. Looked at from the right angle, you will find striking similarities between the challenges those naval explorers faced and what we must accomplish with information that is sometimes not so reliable.

This just means we all have our own version of a map, in the form of rows, columns and blobs.

Decisions need to be made based on data. Every industry has its own sort of data, from retail and food stores with customer data to web developers with statistics gathered on their sites.

So, if you manage a store well or build cool websites, is it because you have good data knowledge?

It is possible to build a server for machine learning without too much of a learning curve. Better yet, if you have a little more budget, you can rent a ready-to-use machine learning or AI server from any of several cloud providers. It almost sounds like we are spoiled for choice when it comes to deriving value from machine learning and discovering trends in our data.
There is a snag though… you need to understand data science, or have an expert in your company’s ranks, to figure out what is actually going on.
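
To illustrate what that first pass at trend discovery might look like, here is a minimal sketch in Python using pandas and scikit-learn. The file “monthly_sales.csv” and its columns “month_index” and “revenue” are hypothetical placeholders.

```python
# A minimal trend-discovery sketch with pandas and scikit-learn.
# "monthly_sales.csv" and its columns are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("monthly_sales.csv")

X = df[["month_index"]]   # e.g. 0, 1, 2, ... one value per month
y = df["revenue"]

# Fit a simple linear trend through the monthly revenue figures.
model = LinearRegression().fit(X, y)
print(f"Trend: {model.coef_[0]:+,.2f} revenue per month")

# Project one month ahead - the simplest possible "discovery".
next_month = pd.DataFrame({"month_index": [df["month_index"].max() + 1]})
print(f"Next month's estimate: {model.predict(next_month)[0]:,.2f}")
```

Even this toy model needs someone who can judge whether a straight line is a sensible fit at all – which is exactly the data science snag described above.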

Alternatively, you could make use of Business Intelligence (BI) to handle the trending and analysis for you.
It is not as new and mysterious as AI and machine learning, but it still comes with a few challenges of its own.
Most BI tools require some level of skill and work very well if you have clean, quality data. But most data is just not clean. It is an unfortunate truth we must live with, and you cannot afford to waste time cleaning data when the month-end report needs to be presented first thing tomorrow morning.
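
As a small illustration of that clean-up work, here is a sketch in Python using pandas; the file “transactions.csv” and its columns “branch” and “amount” are hypothetical examples of the typical dirt in a raw export.

```python
# A minimal data-cleaning pass with pandas. The file
# "transactions.csv" and its columns "branch" and "amount"
# are hypothetical placeholders for a typical messy export.
import pandas as pd

df = pd.read_csv("transactions.csv")

df = df.drop_duplicates()                             # exact duplicate rows
df["branch"] = df["branch"].str.strip().str.title()   # " cape town " -> "Cape Town"
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")  # "N/A" -> NaN
df = df.dropna(subset=["amount"])                     # drop unusable amounts

df.to_csv("transactions_clean.csv", index=False)      # ready for the BI tool
```

A pass like this is quick to write, but it still has to happen before the BI tool can be trusted – which is precisely the time pressure described above.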

Besides these strategic concerns, you are still faced with the question of what you actually want to achieve with your data. Analyzing past trends alone is not going to do much for the company’s future, and it is easy to miss the point when you have your head stuck in reports, analyzing them to death.

Like the aforementioned explorers, you need to use the old map to guide you until you have reached its limit, and then go beyond it – making discoveries that could make all the difference to your company, creating new revenue streams or initiating cost savings! Data should be used to discover great new horizons based on what you have learned.

To utilize your data in this way, proper analysis – and therefore a great navigator – is needed. This is where Sintrex comes in: we “navigate”, enabling you to make the right decisions based on your data!

Sintrex