Applications

Fools and their tools

By Emile Biagio, CTO, Sintrex

Buy local, South Africans! You are creating sustainable careers for our youth!

If I had one buck for every time we get lured into a “software features” discussion with a potential client, I’d own an overstocked game farm!

Or what about the infamous feature shoot-out or comparison spreadsheet that shows the gaps between products? How many propeller-heads have motivated 50% more spend for 20% more features? Hopefully, it was justified.

If you have ripped and replaced monitoring software in the past three years, or if you’ve invested in yet another tool to fill another gap that you thought was covered by the tools you already have, then you’re doing something wrong.

Read carefully: you’re doing something wrong! Don’t go to market and find other tools… because it might just be the fool behind the tool – and not the tool.

Consider a process audit first. Look at what you should be doing, irrespective of the tool’s ability to facilitate the process.

If your process audit compliance is low because of a tool, then look for an alternative, but use your requirement framework to find the right fit.

If you draw up a comparative list, I’d bet that of all the tasks and processes you should be doing, fewer than 50% can be blamed on a tool that does not support them.

Here are a few considerations if you want to buy a tool – I know it’s probably only a fraction of what’s required, but it’s a good place to start:

  • Who’s going to install the tool?
  • Who updates the managed devices loaded for monitoring?
  • How often is it updated?
  • How must it be structured? (Location, SLA, Business Unit or technology based?)
  • Who sets the standards for devices to be monitoring compliant?
  • Who makes sure that the hardware and software resources are sufficient for the tool?
  • Who looks after the hardware?
  • Is there a database used for storage? Who is maintaining the DB?
  • Are the backups in place? Do you need a DR solution?
  • Who provides access to the system?
  • Who sets up the dashboards?
  • If there are integration requirements, who owns that and maintains it?
  • Who must be trained to use the tool? Who does the training?
  • Who disseminates information? If it’s ‘automated’, who sets it up?
  • Who must get what information?
  • What actions must be taken regarding specific information?
  • Who must watch screens and what do they do based on what they see?
  • Who must receive automated escalations? What must they do about it?

And if you don’t want to buy another tool, consider outsourcing it all and ask questions like these:

  • Will you (the service provider) look after all tool-related hardware, software, licenses, capacity, backups, administration, DR and…
  • Can I have a geographical view of all my outages?
  • Can I see all non-performing assets and stressed assets?
  • Can I evaluate capacity issues for all devices?
  • Can all my assets be tracked geographically?
  • Can I have all my assets collated in one area for data mining?
  • Can I mark all my SLAs monthly?
  • Can I see and measure user experience and application performance?
  • Can I check my IT provider compliance to standards and best practices?
  • Can I provide different business units a view or report for their portion of the infrastructure?
  • Can I have an on-site Operations Centre or the option to reduce costs and host it off site?

Make sense? Because now you’re moving away from looking at the tool. You’re making it someone else’s problem and ensuring that you get the required output to run your business and improve service delivery!

Application Monitoring – still haven’t found what you’re looking for?

In the IT monitoring space, it has become a requirement to have eyes on everything in your infrastructure, and everyone has become used to the single pane of glass, API integration with drill-through capability and full-stack service monitoring.

As a result, many specialist companies are punting complete visibility of your entire infrastructure and positioning their tools as the panacea to keeping an eye on it all, the entire time.

Looking at the features and capabilities of the more prevalent vendors out there, it appears to be realistic enough, but can one specialist tool really manage all of this in one go?

Is it possible? Yes!

Does it ever work? Hardly…!

Here’s why…

Supposing you have the Rolls-Royce of application monitoring tools: as soon as you start investigating last-hop network latency on a per-transaction basis to troubleshoot your customer portal’s performance issues – or something just as intricate, but relevant to your IT service – in most cases you will find a mundanely basic network error is actually affecting normal service delivery.

Most of the marquee application monitoring tools you come across can drill into the most critical IT services at any level of detail.

Embarrassingly, and most often upon implementation, these tools end up pointing out bad housekeeping, like misconfigured DHCP or network flows being directed to decommissioned IP addresses.

Despite the grand visions that we have for our IT environments, the ground level is not as stable as we expect or want it to be and will always be something that requires our attention.

One way of looking at it is through the TCP/IP model of network communications. Application monitoring tools are used to look at, troubleshoot and alert on the upper layer, as the name suggests, where transaction details can be decrypted for deep packet inspection (DPI).

Below this, the Transport, Internet and Physical layers are the supporting communication layers; they essentially constitute physical and virtualized network equipment, VLANs, Quality of Service bands and their configurations – everything the business applications need to serve end users with information.

If this TCP/IP model is viewed as a tower of building blocks – which in many ways it is – it stands to reason that the foundational layers need to be in place and under control before the upper layers can be used to any effect.

These are areas and functions that need to be maintained.

Don’t take my word for it, though: refer to any operational lifecycle or governance framework. Somewhere between the planning, design and operation of any IT service, maintenance is required.
ITIL labels it as “Transition”, COBIT says “Review Effectiveness” and the Sintrex in-house methodology chose to call it “Verify”, but it still speaks to evaluating existing structures for effectiveness and performing maintenance where necessary.

But “if it isn’t broken, don’t fix it” – so unless something goes wrong and gets rectified, how would one maintain the lower layers of this tower?

The lowest level of the model needs continuous attention, and your focus can only move to the upper layers once the layers currently in focus have matured into established processes of maintenance and upkeep.

This should ring true for anyone involved in networks, as the first port of call when assigning blame is invariably the network. More trust in the network and higher visibility into the lower layers translate into less time spent hunting basic errors.

And when an end user claims the ERP system is not working, IT support should first and foremost confirm that the physical network servicing the system is up and running.

If you can say with confidence that the basics are in place and the network is doing what it should, it enables you to build up from this foundation to view all the intricacies that depend on the network.
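
As a rough sketch of this “network first” discipline – purely illustrative, not part of any Sintrex tooling, with a hypothetical host name and port, and assuming a Linux-style ping command – the check below confirms basic lower-layer reachability of the server behind an application before any application-level investigation starts.

```python
import socket
import subprocess

# Hypothetical ERP front-end server; replace with the real host and port.
ERP_HOST = "erp.example.local"
ERP_PORT = 443

def network_is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Confirm the lower layers first: ICMP reachability, then a TCP connect."""
    # Layer 3: can we reach the host at all? (one ICMP echo request, Linux flags)
    ping = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                          capture_output=True)
    if ping.returncode != 0:
        print(f"{host}: no ICMP response - investigate the network, not the application")
        return False

    # Layer 4: is the service port accepting connections?
    try:
        with socket.create_connection((host, port), timeout=timeout):
            print(f"{host}:{port} reachable - the basics are in place")
            return True
    except OSError as exc:
        print(f"{host}:{port} unreachable ({exc}) - fix the foundation first")
        return False

if __name__ == "__main__":
    network_is_reachable(ERP_HOST, ERP_PORT)
```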

This is the level of confidence you should have in your network before you put your trust in application monitoring.

Finding method in the madness

In 2018, IT spend in South Africa totaled a whopping R276.6 billion (Gartner, Inc). The core challenge remains determining how to derive value from all this investment…

People, Process and Technology:
This is the foundation of any established IT management domain. It may come with proprietary terminology and be straight from a governance framework playbook, or it could be a customized set of rules based on requirements for your own environment. Whether you are the owner, an outsourced service provider, consultant or even vendor, you will need to fulfill your role within the boundaries of this framework.

In the building, maintenance and management of any IT environment, direct costs are incurred from the People and the Technology. These two terms represent most expenditures from infrastructure to cloud and software subscriptions, to permanent employees, consultants and service providers.
Unpacking all of this, you end up with the building blocks that show up against IT on the company’s financial ledgers. Put together, these building blocks are what IT has to work with to deliver a service that enables the business.

To best deliver a service that satisfies requirements, there are questions that need answering, such as:

  • Which team looks after WAN CE links?
  • What dashboard(s) do you grant to service provider XYZ?
  • Which processes should I keep internal and which should I entrust to an outsourced provider?
  • What SLAs need to be negotiated and imposed on each team involved in service delivery?

… to name but a few. In practice, the questions will very likely range far beyond these and dig much deeper into the particular details.

It is indeed a daunting challenge.

With due respect to all vendors, there is seldom a clear winner when it comes to a particular software toolset, and the same can be said about service providers. A few obvious choices come to mind, but most purchases that are seen as strategically important go through a like-for-like comparison, exhaustive and often extended proofs of concept, as well as carefully negotiated contract terms.

By its nature, purchasing software or signing service agreements is something that experience can teach you. The rules of the game do not change all that often, and investing in technology can – in most cases – be measured and justified by a prior internal success or by reference sites where the same purchase has proven successful. “Company A” might thrive on open source software and specialists capable of running systems smoothly, in which case you could follow the same mix of skills and solution sets. Or “Company B” is able to show ROI on high-end proprietary solutions that come with marquee price tags, in which case a value proposition can be built.

Despite having to service your own unique environment, there is most certainly a recipe for buying toolsets and selecting service providers or employees to meet your own requirements.

Why, then, do some succeed and others fail with certain technology toolsets?
Or, by the same token, why do some partnerships with a service provider work while others do not?

If you have followed along to this point, it should be obvious that the third part of the introductory management framework has so far been left untouched. It is the part that cannot be purchased, but that each management team needs to build, grow and evolve to suit the needs of their own business:

Process!

This is the “secret sauce” unique to each environment that can ensure success.
Without it, no mix of “end-to-end” solution sets will work, nor will any number of product specialists run an IT environment successfully. The process is what defines who gets the correct information at the right time and what to do with that information.

There should be a set procedure for every eventuality in an IT infrastructure. If an event occurs that impacts the ERP system, what remedial steps need to be taken and who needs to be informed?
What is the time frame and standard response to resolve an incident? Which steps in this procedure are repeated and thus suitable for automation?
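
As a purely illustrative sketch – the event name, contacts and time frames are invented, not a Sintrex artefact – such a procedure can even be captured as data rather than tribal knowledge, so that who gets informed, within what time frame, and which steps are candidates for automation is explicit:

```python
# Hypothetical incident procedure captured as data instead of tribal knowledge.
RUNBOOK = {
    "erp_unreachable": {
        "notify": ["service.desk@example.co.za", "erp.owner@example.co.za"],
        "response_time_minutes": 15,
        "steps": [
            {"action": "confirm network path to the ERP server", "automatable": True},
            {"action": "check ERP application and database services", "automatable": True},
            {"action": "phone the business owner with an impact estimate", "automatable": False},
        ],
    },
}

def handle_event(event: str) -> None:
    """Look up the documented procedure for an event and print what must happen."""
    procedure = RUNBOOK.get(event)
    if procedure is None:
        print(f"No documented procedure for '{event}' - that gap is itself a finding.")
        return
    print(f"Notify {', '.join(procedure['notify'])} "
          f"within {procedure['response_time_minutes']} minutes.")
    for step in procedure["steps"]:
        marker = "automate" if step["automatable"] else "manual"
        print(f"  [{marker}] {step['action']}")

handle_event("erp_unreachable")
```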

Unfortunately, process is often overlooked. Not to the extent there is no process, but rather that the process is not adapted to changes within the environment. The trick is to acknowledge that this is a “live” document and to ensure that these adaptations are made to keep in line with today’s environment.

A technology specialist might leave the organization and someone inexperienced is appointed as custodian of a system they cannot properly use. There might be changes in your company’s e-mail or internal communications and the automated messages are just not being delivered anymore. Any part of the IT infrastructure that changes will have an impact on the tools used and the people running them and vice versa.

Provided that the process is documented and available to the various stakeholders and responsible parties, the impact of the changes mentioned above can be minimized. It enables one person to hand over to his or her successor and could be used to outline minimum requirements for replacement software. The process lays the platform for growth and sustainability in how IT delivers value to business. You cannot get this off the shelf, but if it is not maintained, your process can be the most costly component of your IT management framework. We can therefore safely say that effective and efficient processes are paramount to deriving value from your investment in IT!

SD-WAN’s impact on network monitoring

SD-WAN providers claim that application performance can improve by up to forty times when migrated to SD-WAN technologies… That’s a phenomenal statistic! But how true is it? How did this number roll up to the marketing department to lure you into clicking the “Subscribe to SD-WAN” button?

Strategy guru Peter Drucker once said: “If you cannot measure it, you cannot improve it.” So these claims imply some form of measurement to back up the statistic. This also means that the initial concern is having the visibility to actually measure performance before being able to improve on it.

Recently at Interop ITX in Las Vegas, one of the breakfast briefings was hosted by IDC. The topic was “Intelligent Automation of Networks” and, more specifically, the rise of “intent-based networking.”

IDC claims that network visibility is critical for all companies looking to digitally transform or improve their cloud architecture deployment. Those facing pressure to support a massively complex infrastructure should start by taking a good, hard look at their network monitoring capabilities.

It’s not just about monitoring a massively complex infrastructure to ensure a better user experience, but also about baselining the current user experience to ensure that it actually does improve. Migration for the sake of trying to resolve user-perceived problems may not yield the desired user satisfaction, increased productivity or operational savings.

Many years ago, Sintrex was at the forefront of monitoring client experience while enterprises were migrating from private WANs to service provider MPLS networks. It was essential to baseline existing service levels so that new service levels could be compared. It’s not much different now. To retain control, organisations need to retain visibility.
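
One simple way to start building such a baseline is to record response times of a key service over time, so that pre- and post-migration figures can be compared. The sketch below is a minimal illustration using Python’s requests library against a hypothetical portal URL; it is not how the Sintrex platform measures user experience.

```python
import csv
import time
from datetime import datetime, timezone

import requests  # third-party: pip install requests

PORTAL_URL = "https://portal.example.co.za/"  # hypothetical service to baseline
SAMPLES = 10
INTERVAL_SECONDS = 60

# Append a timestamped response-time sample per interval; compare the file
# collected before the SD-WAN migration with one collected afterwards.
with open("baseline.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for _ in range(SAMPLES):
        start = time.perf_counter()
        try:
            response = requests.get(PORTAL_URL, timeout=10)
            elapsed_ms = (time.perf_counter() - start) * 1000
            writer.writerow([datetime.now(timezone.utc).isoformat(),
                             response.status_code, round(elapsed_ms, 1)])
        except requests.RequestException as exc:
            writer.writerow([datetime.now(timezone.utc).isoformat(), "error", str(exc)])
        time.sleep(INTERVAL_SECONDS)
```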

A couple of other predictions made by IDC include:

  • In the near term (6-to-12 months), monitoring for SD-WAN links and specific SaaS services will see the greatest levels of investment.
  • Over the 12 to 24 month period, enterprises will invest in and integrate new network performance monitoring capabilities with existing application performance management platforms.

Sintrex executive Ludwig Myburgh asserts that “from a Sintrex perspective, SD networks do not have a major impact on our monitoring paradigm. Devices will still have IP addresses with management capabilities, interconnected via subnets, and will perform similar networking services.”

“The configuration and changes applied dynamically to these devices is where there is a major change to the traditional WAN paradigm. To monitor, store, check for compliance, track changes etc., we see ourselves playing a major role. Vendors are exposing the information via APIs, and particularly RESTful APIs.”

“This is where Sintrex will interconnect and collate information, store in the CMDB and bring into a consolidated warehouse to provide holistic IT intelligence.”

“From a Fault, Performance and Flow perspective there are no major changes, as most of the information is still available via SNMP and NetFlow for the network-based platforms and WMI for the Windows environment.”
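
As a rough sketch of the integration pattern Myburgh describes – pulling inventory from a vendor’s RESTful API and collating it into a central store – the example below uses a hypothetical controller endpoint, token and field names; real vendor APIs, authentication and the Sintrex CMDB will differ.

```python
import sqlite3

import requests  # third-party: pip install requests

# Hypothetical controller endpoint and token - vendor-specific in practice.
CONTROLLER_URL = "https://sdwan-controller.example.co.za/api/v1/devices"
API_TOKEN = "replace-me"

def fetch_devices() -> list[dict]:
    """Pull the device inventory exposed by the controller's RESTful API."""
    response = requests.get(CONTROLLER_URL,
                            headers={"Authorization": f"Bearer {API_TOKEN}"},
                            timeout=30)
    response.raise_for_status()
    return response.json()

def store_devices(devices: list[dict]) -> None:
    """Collate the inventory into a local table (a stand-in for a CMDB)."""
    with sqlite3.connect("cmdb.db") as db:
        db.execute("""CREATE TABLE IF NOT EXISTS device
                      (name TEXT PRIMARY KEY, ip TEXT, site TEXT)""")
        for d in devices:
            db.execute("INSERT OR REPLACE INTO device VALUES (?, ?, ?)",
                       (d.get("name"), d.get("ip"), d.get("site")))

if __name__ == "__main__":
    store_devices(fetch_devices())
```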

This article was published in partnership with Sintrex.

Partner update – ExtraHop introduces Reveal(x)

ExtraHop Introduces Reveal(x) to Expose Attacks on Critical Assets and Automate Investigations

New Security Analytics Product Discovers and Contextualizes all Network Transactions to Surface High Risk Anomalies and Cut Investigation Time from Days to Minutes

SEATTLE – January 30, 2018 – ExtraHop, the leader in analytics for security and performance management, today announced the general availability of ExtraHop Reveal(x). This new security analytics product builds on enterprise-proven anomaly detection powered by wire data, giving security teams much-needed insight into what’s happening within the enterprise while automating the detection and investigation of threats. By analyzing all network interactions for abnormal behavior and identifying critical assets in the environment, Reveal(x) focuses analysts’ attention on the most important risks and streamlines response to limit exposure.

An Industry in Transition…

Security teams face a convergence of factors that complicate operations and decrease visibility. Hybrid and multi-cloud architectures increase agility but reduce operational control. Encryption is vital but disguises both benign and malicious activities. At the same time, businesses are shifting the emphasis from physical control points like endpoints and firewalls to logical perimeters such as trusted domains, privileged users, IoT, cloud, microservices, and containers. A new source of insight is required for modern architectures, one that provides empirical evidence to help analysts triage and investigate threats with confidence and timeliness.

“Attack surfaces are expanding and the sophistication of attackers is increasing. There simply aren’t enough talented security professionals to keep up,” said Jesse Rothstein, CTO and co-founder, ExtraHop. “Reveal(x) provides security teams with increased scrutiny of critical assets, detection of suspicious and anomalous behaviors, and workflows for both automated and streamlined investigation. We enable practitioners to do more with less by getting smarter about the data they already have.”

A Better Approach, A More Efficient Workflow

Reveal(x) addresses the gaps in security programs by harnessing wire data, which encompasses all information contained in application transactions. It auto-discovers, classifies, and prioritizes all devices, clients, and applications on the network and employs machine learning to deliver high-fidelity insights immediately. Anomalies are directly correlated with the attack chain and highlight hard-to-detect activities, including:

  • Internal reconnaissance — scans for open ports and active hosts, brute force attacks, attempted logins, and unusual access patterns.
  • Lateral movement — relocation from an original entry point, privilege escalation, and ransomware spread.
  • Command and control traffic — communications between a compromised host within the network and the targeted asset or an external host.
  • Exfiltration — large file transfers, unusual read/write patterns, and unusual application and user activity from an asset either directly or via a stopover host.

In a single unified system, Reveal(x) guides analysts to review relationships between these malicious activities and related evidence that informs disposition: the exhibited behavior, baselined measurements, transaction details, and assets involved. Live Activity Maps show communications in real time and can also replay transactions to illuminate the incident’s timing and scope. Detailed forensic evidence is just a click away, enabling immediate root cause determination using individual packets.
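
To make one of the activities listed above concrete in the simplest possible terms – a toy illustration, not ExtraHop’s implementation – internal reconnaissance such as a port scan can be surfaced from flow records by flagging hosts that touch an unusually large number of distinct ports:

```python
from collections import defaultdict

# Toy flow records: (source_ip, destination_ip, destination_port)
flows = [
    ("10.0.0.5", "10.0.0.9", 22), ("10.0.0.5", "10.0.0.9", 80),
    ("10.0.0.5", "10.0.0.9", 443), ("10.0.0.5", "10.0.0.9", 3389),
    ("10.0.0.7", "10.0.0.9", 443),
]

PORT_SCAN_THRESHOLD = 3  # distinct destination ports per source within the window

ports_by_source: dict[str, set[int]] = defaultdict(set)
for src, _dst, dport in flows:
    ports_by_source[src].add(dport)

for src, ports in ports_by_source.items():
    if len(ports) >= PORT_SCAN_THRESHOLD:
        print(f"possible internal reconnaissance from {src}: "
              f"{len(ports)} distinct ports probed")
```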

What Customers Are Saying

“When you work in a business dealing with the nation’s leading insurance companies, there is a lot of pressure to get it right. We rely on ExtraHop to provide us with the visibility needed to investigate performance and security issues,” said Chris Wenger, Senior Manager of Network & Telecommunication Systems at Mitchell International. “With ExtraHop in our IT environment, we can more easily monitor all of the communications coming into our network, including use of insecure protocols. These insights enable my team to better secure our environment. ExtraHop has been that extra layer of security for us.”

What Analysts Are Saying

“In security, your intelligence is only as good as the data source from which it’s derived,” said Eric Ogren, Senior Analyst at 451 Research. “The network is an ideal place to identify active computing devices and call out threats as they attempt to probe and communicate. ExtraHop Reveal(x) balances real-time critical asset insights with machine learning-based network traffic analytics to create visibility that will help security teams stay one step ahead of security incidents for those assets that matter most.”

What Partners Are Saying

“There are no silver bullets when it comes to identifying and managing risk within a business information security program. It’s a multidimensional problem that requires reliable sources of insight and best-of-breed technology,” said Tim O’Brien, Director of Security Operations at Trace3. “We are excited to integrate the power of ExtraHop Reveal(x) enterprise visibility and machine learning into our world-class security practice, helping our customers identify and address threats before they affect the business.”

Product Availability

ExtraHop Reveal(x) is available now in North America via ExtraHop’s value-added resellers for an annual subscription.

About ExtraHop

ExtraHop is the first place IT turns for insights that transform and secure the digital enterprise. By applying real-time analytics and machine learning to all digital interactions on the network, ExtraHop delivers instant and accurate insights that help IT improve security, performance, and the digital experience. Just ask the hundreds of global ExtraHop customers, including Sony, Lockheed Martin, Microsoft, Adobe, and Google.

Sintrex deploys automated testing to provide end-to-end service solutions

Sintrex, an Infrastructure Management Company based in South Africa, believes in quality service delivery and is passionate about the innovative pursuit of excellence in providing end-to-end IT solutions and services.

To help ensure this quality, they have worked hard to deploy automated testing to the Sintelligent modules they use to deliver their services to clients.

How Sintrex automated testing works

Automated testing makes use of a testing framework consisting of Jenkins and Selenium that allows test cases to be run unassisted at any time of the day or night to ensure functionality is working as expected.

Sintrex’s automated software framework allows them to test changes made to their code base at regular intervals and allows them to do so with minimal assistance from testing team members.

By eliminating the need for a person to do the testing, the framework allows these tests to run nightly and helps identify any issues caused by code changes almost as soon as they happen.
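
For a sense of what such an unattended test case might look like, here is a minimal sketch using Selenium’s Python bindings; the URL, element IDs and credentials are hypothetical, and in practice the Sintrex suite is scheduled from Jenkins rather than run by hand.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical login page and element IDs - the real test data lives in the suite.
BASE_URL = "https://sintelligent.example.co.za/login"

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")  # run unattended, e.g. from a nightly Jenkins job
driver = webdriver.Chrome(options=options)

try:
    driver.get(BASE_URL)
    driver.find_element(By.ID, "username").send_keys("test.user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "login-button").click()
    # The functional check: after login, the dashboard heading should be visible.
    assert driver.find_element(By.ID, "dashboard-title").is_displayed()
    print("login test passed")
finally:
    driver.quit()
```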

The road ahead

As their test automation framework matures, Sintrex will be in a position to rely on their automation processes with enough confidence to release versions of the modules more frequently.

“Unlike solutions where the software sits in a central location and is accessed by multiple clients, our software is deployed at each of our clients and used to provide the services we offer to that particular client,” says Gregory Hooper, the Quality Assurance Manager.

This adds logistical challenges to their upgrade process, as each release must be deployed in an individual change control slot at each of their clients.

This means that, although there will be benefits in being able to deliver releases more frequently, Sintrex still needs to find the sweet spot for how regularly to deploy.

Through the deployment of automated testing, Sintrex is now able to offer its IT management solutions in an even more accurate and relevant way.

This gives you improved real-time infrastructure and service-level information that is constantly available, enabling easy identification of service-level lapses and faster problem resolution.

To get insight into your business applications, IT infrastructure or network usage patterns, backed by Sintrex’s automated testing, please visit the Sintrex website.

Make your IT work smarter, not harder

As a leader in IT Infrastructure Management, Sintrex has earned a reputation for world-class end-to-end solutions with a personal touch.

While many products and services in the local IT sector are based on imported products, Sintrex provides local solutions.

By listening to and adapting services to each client’s unique needs, Sintrex offers a complete end-to-end service, with accurate, real-time data that ensures a desired level of performance is achieved and maintained in all IT assets throughout the business.

This service is built on the following four service pillars, which work to form a comprehensive IT management solution designed to enhance the customer’s experience of their IT:

  • Sintrex Infrastructure Management: Sintrex becomes a long-term partner in the accurate and efficient management of your IT infrastructure.
  • Sintrex Asset Management: Sintrex delivers real-time information on your assets to ensure the business is operating at maximum efficiency.
  • Sintrex Application Management: Sintrex strives to proactively detect and diagnose application performance problems to maintain a superior level of service for the business by monitoring and managing the performance and availability of software applications.
  • Sintrex SLA Management: Sintrex service-level management provides for continual identification, monitoring and review of the levels of IT services specified in the Service Level Agreements (SLAs) the business has with multiple third-party suppliers and service providers.

What really sets Sintrex apart, however, is the motivated, credible and attentive team.

Through a culture of excellence, partnership and fun, Sintrex attracts and empowers staff with an inspirational work experience, world class software and globally renowned partners.

“Sintrex delivers great results because it mentors a great team,” said Keith Mclachlan, CEO at Sintrex.

“A shared passion for client-focused IT visibility solutions brings the staff together, productively.”

Mclachlan contends that South African companies need to be at the forefront of developing young people and creating jobs.

Keeping revenue local empowers companies investing in the future of South Africa to mature their products, drive innovation and ultimately create employment opportunities.

Sintrex has a strong internship programme that currently sees 80% of its interns joining its team full-time.
