Insights


By Emile Biagio, CTO, Sintrex

I recently watched an investigative series set in the 1970s, in which a judge dismissed evidence linking a suspect to a murder, claiming he did not believe in all this “scientific mumbo jumbo”. My my, how far we have progressed. Imagine how many cold cases could historically have been solved through advances in technology: using the same evidence, but adding more information to solve the case.

Fast forward to the present, where seven of the world’s ten largest companies are tech companies and oil is no longer considered our most valuable resource. Yup, not oil, but data!

Data? Yes, data – or rather, information applied in the correct context, in my opinion. At a recent client visit, I heard how an operations centre receives thousands of messages and notifications during an outage, yet identifying the root cause remains an art in itself.

So, as is the norm today, this client has monitoring systems plugged into just about every critical application running on their infrastructure. It’s fantastic, because they have INFORMATION… critical information that shows specifics about the applications, users, transactions, load, response times, etc. This information empowers them to tweak, tune and adapt the systems to drive business productivity.

The problem is, when there is a glitch in the matrix, all the monitoring systems spew out thousands of messages to highlight anomalies. This is what we build: more and more systems that collect information. I pulled a statistic from another client (for interest): 489 million messages in one month… that’s a lot – roughly sixteen million messages a day, or “twenty hundred five and seventy”, depending on who’s reading (sorry Mr. Zuma, still funny).

So how can we constructively look at all of this information, filter out the noise and pinpoint the root cause? Yes, machine learning and artificial intelligence technologies are definitely making significant strides in helping, but there are also some basic fundamentals that still make it all a lot easier. Maybe not from the 1970s, but at least from the 1990s:

  • A system that monitors your underlying common denominator – the network – and automatically identifies root-cause outages.
  • The ability to classify anomaly impact, e.g. minor, major or critical.
  • A basic filter that lets you swiftly view the information you need and screen out the noise you can ignore (a rough sketch of such a filter follows below).
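
As a minimal sketch of what that third fundamental might look like in practice – the message format, severity keywords and thresholds below are hypothetical assumptions for illustration, not a description of any particular product:

```python
# Minimal sketch of an alert severity filter. All message formats,
# keywords and thresholds here are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str   # e.g. device or application name
    message: str  # raw notification text

# Hypothetical keyword-to-severity mapping, most severe checked first.
SEVERITY_RULES = {
    "Critical": ("down", "unreachable", "outage"),
    "Major": ("degraded", "timeout", "packet loss"),
    "Minor": ("threshold", "warning"),
}

def classify(alert: Alert) -> str:
    """Assign a severity class based on simple keyword matching."""
    text = alert.message.lower()
    for severity, keywords in SEVERITY_RULES.items():
        if any(k in text for k in keywords):
            return severity
    return "Info"  # default: noise to be filtered out

def filter_noise(alerts, minimum="Major"):
    """Keep only alerts at or above the requested severity floor."""
    order = ["Info", "Minor", "Major", "Critical"]
    floor = order.index(minimum)
    return [a for a in alerts if order.index(classify(a)) >= floor]

if __name__ == "__main__":
    alerts = [
        Alert("core-router-01", "Interface Gi0/1 down"),
        Alert("erp-app", "Response time threshold exceeded"),
        Alert("branch-switch", "Link degraded, packet loss 12%"),
    ]
    for a in filter_noise(alerts):
        print(classify(a), "-", a.source, "-", a.message)
```

The point is not the keyword matching itself, but that a deliberate severity floor turns sixteen million raw messages into a short list worth an operator’s attention.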

If you apply a filter to a badly taken photo, it will look OK, but apply the same filter to a great photo and it’s suddenly brilliant! Similarly, slap ML and/or AI on top of data that carries the above identifiers and all of a sudden brilliance enters your operations centre.

“Information is power, but only if people are able to access, understand and apply it.” ~ Unknown

Fools and their tools

By Emile Biagio, CTO, Sintrex

Buy local, South Africans! You are creating sustainable careers for our youth!

If I had one buck for every time we get lured into a “software features” discussion with a potential client, I’d own an overstocked game farm!

Or what about the infamous feature shoot-out, or the comparison spreadsheet that shows the gaps between products? How many propeller heads have motivated 50% more spend for 20% more features? Hopefully, it was justified.

If you have ripped and replaced monitoring software in the past three years, or if you’ve invested in yet another tool to fill another gap that you thought was covered by the tools you already have, then you’re doing something wrong.

Read carefully, you’re doing something wrong! Don’t go to market and find other tools… because it might just be the fool behind the tool – and not the tool.

Consider a process audit first. Look at what you should be doing, irrespective of the tool’s ability to facilitate the process.

If your process audit compliance is low because of a tool, then look for an alternative, but use your requirement framework to find the right fit.

If you draw up a comparative list, I’d bet that of all the tasks and processes you should be performing, fewer than 50% of the gaps can be blamed on a tool that does not support them.

Here are a few considerations if you want to buy a tool – I know it’s probably only a fraction of what’s required, but it’s a good place to start:

  • Who’s going to install the tool?
  • Who updates the managed devices loaded for monitoring?
  • How often is it updated?
  • How must it be structured? (Location, SLA, Business Unit or technology based?)
  • Who sets the standards for devices to be monitoring-compliant?
  • Who makes sure that the hardware and software resources are sufficient for the tool?
  • Who looks after the hardware?
  • Is there a database used for storage? Who is maintaining the DB?
  • Are the backups in place? Do you need a DR solution?
  • Who provides access to the system?
  • Who sets up the dashboards?
  • If there are integration requirements, who owns that and maintains it?
  • Who must be trained to use the tool? Who does the training?
  • Who disseminates information? If it’s ‘automated’, who sets it up?
  • Who must get what information?
  • What actions must be taken regarding specific information?
  • Who must watch screens and what do they do based on what they see?
  • Who must receive automated escalations? What must they do about it?

And if you don’t want to buy another tool, consider outsourcing it all and ask questions like these:

  • Will you (the service provider) look after all the tool-related hardware, software, licenses, capacity, backups, administration, DR and…
  • Can I have a geographical view of all my outages?
  • Can I see all non-performing assets and stressed assets?
  • Can I evaluate capacity issues for all devices?
  • Can all my assets be tracked geographically?
  • Can I have all my assets collated in one area for data mining?
  • Can I mark all my SLAs monthly?
  • Can I see and measure user experience and application performance?
  • Can I check my IT provider compliance to standards and best practices?
  • Can I provide different business units a view or report for their portion of the infrastructure?
  • Can I have an on-site Operations Centre or the option to reduce costs and host it off site?

Make sense? Because now you’re moving away from looking at the tool. You’re making it someone else’s problem and ensuring that you get the required output to run your business and improve service delivery!

Beauty vs. the Beast!

This thought pattern is bananas!

Most of us like bananas, right? We’re privileged to have access to bananas in areas where they do not grow naturally. We even have the luxury of choosing how many we want to buy and we can hand-pick them from hundreds on display!

But why do you buy bananas? Do you just buy for the sake of having a fruit snack? Are you making a fruit salad or perhaps banana bread? Do you buy them because they’re on sale and look REALLY good? And because they’re on sale, “let’s buy more and decide what to do with them after the purchase!” (Sounds like my wife…)

Have you noticed that when you purposely buy bananas for a specific reason, you become very selective in your purchasing decision? Generally, we would shop for ripe – and maybe organic – bananas to make really nice banana bread. Anything other than ripe really will not do.

Making the ideal banana bread requires a good recipe, some additional ingredients and some know-how. We could opt to purchase a pre-made banana bread, but we know that some people REALLY know how to make an excellent banana bread, so much so that you might ask them to make it for you!

So what?

So, what if I told you that the banana is your product and the banana bread is your required output? This would mean the additional ingredients, recipe and baker make up the services provided to get to the required output.

I use this metaphor to illustrate to many organisations that when they start looking at service companies to provide services, they should find someone that can provide them with the required output!

Most organisations – especially in IT – will use tech experts to review service companies and (as I’ve heard before) ask them to “lift their skirts” and reveal the components that make up the service offering… i.e. “lift your skirts and show us your bananas, baker!” Mmmm, this metaphor just took a turn down the wrong path…

Let’s refocus! Don’t fall into the trap of evaluating products (bananas) when you know what you want as a service! Contract for the required output and let the service provider control the rest!

 

Application Monitoring – still haven’t found what you’re looking for?

In the IT monitoring space, it has become a requirement to have eyes on everything in your infrastructure, and everyone has become used to the single pane of glass, API integration with drill-through capability, and full-stack service monitoring.

As a result, many specialist companies are punting complete visibility of your entire infrastructure and positioning their tools as the panacea to keeping an eye on it all, the entire time.

Looking at the features and capabilities of the more prevalent vendors out there, it appears to be realistic enough, but can one specialist tool really manage all of this in one go?

Is it possible? Yes!

Does it ever work? Hardly…!

Here’s why…

Suppose you have the Rolls-Royce of application monitoring tools: as soon as you start investigating last-hop network latency on a per-transaction basis to troubleshoot your customer portal’s performance issues – or something just as intricate, but relevant to your IT service – in most cases you will find that a mundanely basic network error is actually affecting normal service delivery.

Most of the marquee application monitoring tools you come across can surface almost any level of detail about the most critical IT services.

Embarrassingly, and most often upon implementation, these tools end up pointing out bad housekeeping, like misconfigured DHCP or network flows being directed to discontinued IP addresses.

Despite the grand visions that we have for our IT environments, the ground level is not as stable as we expect or want it to be and will always be something that requires our attention.

One way of looking at it is through the TCP/IP model of network communications. Application monitoring tools are used to watch, troubleshoot and alert on the upper layer, as the name suggests, where transaction details can be decrypted for deep packet inspection (DPI).

Below it, the Transport, Internet and Network Access (physical) layers are the supporting communication layers, essentially comprising physical and virtualised network equipment, VLANs, quality-of-service bands and their configurations – everything the business applications need to serve end users with information.

If this TCP/IP model is viewed as a tower of building blocks – which in many ways it is – it stands to reason that the foundational layers need to be in place and under control before the upper layers can be used to any effect.

These are areas and functions that need to be maintained.

Don’t take my word for it, though; refer to any operational lifecycle or governance framework. Somewhere between the planning, design and operation of any service in IT, maintenance is required.
ITIL labels it as “Transition”, COBIT says “Review Effectiveness” and the Sintrex in-house methodology chose to call it “Verify”, but it still speaks to evaluating existing structures for effectiveness and performing maintenance where necessary.

But “If it isn’t broken, don’t fix it”, so unless something goes wrong and gets rectified, how would one maintain the lower layers of this tower?

The lowest level of the model needs continuous emphasis; your focus can only move to the upper layers once the layers currently in focus have matured into established processes of maintenance and upkeep.

This should ring true for anyone involved in networks, as the first port of call when assigning blame is, invariably, the network. More trust in the network and better visibility into the lower layers translate into less time spent hunting basic errors.

And when an end user claims the ERP system is not working, IT support should first and foremost confirm that the physical network servicing the system is up and running.
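
As a minimal sketch of that first step – assuming nothing about any particular monitoring product, and with hypothetical host names and ports – a support script might confirm basic reachability before anyone opens the application-layer tooling:

```python
# Minimal sketch: confirm basic network reachability before blaming the
# application. Host names and ports below are hypothetical examples.
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # The ERP front end and its database, as an illustration.
    checks = [("erp.example.local", 443), ("erp-db.example.local", 1433)]
    for host, port in checks:
        status = "UP" if tcp_reachable(host, port) else "DOWN"
        print(f"{host}:{port} -> {status}")
```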

If you can say with confidence that the basics are in place and the network is doing what it should, you can build up from this foundation to view all the intricacies that depend on the network.

This is the level of confidence you should have in your network before you put your trust in application monitoring.

SD-WAN’s impact on network monitoring

SD-WAN providers claim that application performance can improve by up to forty times when migrated to SD-WAN technologies… That’s a phenomenal statistic! But how true is it? How did this number roll up to the marketing department to lure you into clicking the “Subscribe to SD-WAN” button?

Management guru Peter Drucker is credited with saying: “If you cannot measure it, you cannot improve it.” So these claims imply some form of measurement behind the statistic. It also means that the initial concern is having the visibility to actually measure performance before being able to improve on it.

Recently at Interop ITX in Las Vegas, one of the breakfast briefings was hosted by IDC. The topic was “Intelligent Automation of Networks” and, more specifically, the rise of “intent-based networking”.

IDC claim that network visibility is critical for all companies looking to digitally transform or improve their cloud architecture deployment. Those facing pressure to support a massively complex infrastructure should start by taking a good, hard look at their network monitoring capabilities.

It’s not just about monitoring a massively complex infrastructure to ensure a better user experience, but also about baselining the current user experience to ensure that it actually improves. Migration for the sake of resolving user-perceived problems may not yield the desired user satisfaction, increased productivity or operational savings.

Many years ago, Sintrex was at the forefront of monitoring client experience while enterprises were migrating from private WANs to service provider MPLS networks. It was essential to baseline existing service levels so that new service levels could be compared. It’s not much different now. To retain control, organisations need to retain visibility.

A couple of other predictions made by the IDC include:

  • In the near term (6-to-12 months), monitoring for SD-WAN links and specific SaaS services will see the greatest levels of investment.
  • Over the 12 to 24 month period, enterprises will invest in and integrate new network performance monitoring capabilities with existing application performance management platforms.

Sintrex executive Ludwig Myburgh asserts that “from a Sintrex perspective, software-defined networks do not have a major impact on our monitoring paradigm. Devices will still have IP addresses and management capabilities, be interconnected via subnets, and perform similar networking services.”

“The configuration and changes applied dynamically to these devices are where the major change from the traditional WAN paradigm lies. In monitoring, storing, checking for compliance and tracking changes, we see ourselves playing a major role. Vendors are exposing the information via APIs, particularly RESTful APIs.”

“This is where Sintrex will interconnect and collate information, store it in the CMDB and bring it into a consolidated warehouse to provide holistic IT intelligence.”

“From a fault, performance and flow perspective there are no major changes, as most of the information is still available via SNMP and NetFlow for the network-based platforms, and WMI for the Windows environment.”
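
As a rough illustration of the API-driven collection Myburgh describes – the controller URL, token and JSON field names below are entirely hypothetical, invented for this sketch rather than taken from any vendor’s actual API:

```python
# Hypothetical sketch of pulling SD-WAN device inventory from a vendor's
# RESTful API into a local CMDB-style store. The URL, token and JSON
# field names are invented for illustration only.
import json
import urllib.request

API_URL = "https://sdwan-controller.example.local/api/v1/devices"
API_TOKEN = "replace-with-real-token"  # assumption: token-based auth

def fetch_devices():
    """Fetch the device list from the (hypothetical) controller API."""
    req = urllib.request.Request(
        API_URL, headers={"Authorization": f"Bearer {API_TOKEN}"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

def store_in_cmdb(devices, path="cmdb_devices.json"):
    """Persist a minimal record per device for later data mining."""
    records = [
        {"name": d.get("name"), "ip": d.get("ip"), "site": d.get("site")}
        for d in devices
    ]
    with open(path, "w") as f:
        json.dump(records, f, indent=2)

if __name__ == "__main__":
    store_in_cmdb(fetch_devices())
```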

This article was published in partnership with Sintrex.


Partner update – ExtraHop introduces Reveal(x)

ExtraHop Introduces Reveal(x) to Expose Attacks on Critical Assets and Automate Investigations

New Security Analytics Product Discovers and Contextualizes all Network Transactions to Surface High Risk Anomalies and Cut Investigation Time from Days to Minutes

SEATTLE – January 30, 2018 – ExtraHop, the leader in analytics for security and performance management, today announced the general availability of ExtraHop Reveal(x). This new security analytics product builds on enterprise-proven anomaly detection powered by wire data, giving security teams much-needed insight into what’s happening within the enterprise while automating the detection and investigation of threats. By analyzing all network interactions for abnormal behavior and identifying critical assets in the environment, Reveal(x) focuses analysts’ attention on the most important risks and streamlines response to limit exposure.

An Industry in Transition…

Security teams face a convergence of factors that complicate operations and decrease visibility. Hybrid and multi-cloud architectures increase agility but reduce operational control. Encryption is vital but disguises both benign and malicious activities. At the same time, businesses are shifting the emphasis from physical control points like endpoints and firewalls to logical perimeters such as trusted domains, privileged users, IoT, cloud, microservices, and containers. A new source of insight is required for modern architectures, one that provides empirical evidence to help analysts triage and investigate threats with confidence and timeliness.

“Attack surfaces are expanding and the sophistication of attackers is increasing. There simply aren’t enough talented security professionals to keep up,” said Jesse Rothstein, CTO and co-founder, ExtraHop. “Reveal(x) provides security teams with increased scrutiny of critical assets, detection of suspicious and anomalous behaviors, and workflows for both automated and streamlined investigation. We enable practitioners to do more with less by getting smarter about the data they already have.”

A Better Approach, A More Efficient Workflow

Reveal(x) addresses the gaps in security programs by harnessing wire data, which encompasses all information contained in application transactions. It auto-discovers, classifies, and prioritizes all devices, clients, and applications on the network and employs machine learning to deliver high-fidelity insights immediately. Anomalies are directly correlated with the attack chain and highlight hard-to-detect activities, including:

  • Internal reconnaissance — scans for open ports and active hosts, brute force attacks, attempted logins, and unusual access patterns.
  • Lateral movement — relocation from an original entry point, privilege escalation, and ransomware spread.
  • Command and control traffic — communications between a compromised host within the network and the targeted asset or an external host.
  • Exfiltration — large file transfers, unusual read/write patterns, and unusual application and user activity from an asset either directly or via a stopover host.

In a single unified system, Reveal(x) guides analysts to review relationships between these malicious activities and related evidence that informs disposition: the exhibited behavior, baselined measurements, transaction details, and assets involved. Live Activity Maps show communications in real time and can also replay transactions to illuminate the incident’s timing and scope. Detailed forensic evidence is just a click away, enabling immediate root cause determination using individual packets.

What Customers Are Saying

“When you work in a business dealing with the nation’s leading insurance companies, there is a lot of pressure to get it right. We rely on ExtraHop to provide us with the visibility needed to investigate performance and security issues,” said Chris Wenger, Senior Manager of Network & Telecommunication Systems at Mitchell International. “With ExtraHop in our IT environment, we can more easily monitor all of the communications coming into our network, including use of insecure protocols. These insights enable my team to better secure our environment. ExtraHop has been that extra layer of security for us.”

What Analysts Are Saying

“In security, your intelligence is only as good as the data source from which it’s derived,” said Eric Ogren, Senior Analyst at 451 Research. “The network is an ideal place to identify active computing devices and call out threats as they attempt to probe and communicate. ExtraHop Reveal(x) balances real-time critical asset insights with machine learning-based network traffic analytics to create visibility that will help security teams stay one step ahead of security incidents for those assets that matter most.”

What Partners Are Saying

“There are no silver bullets when it comes to identifying and managing risk within a business information security program. It’s a multidimensional problem that requires reliable sources of insight and best-of-breed technology,” said Tim O’Brien, Director of Security Operations at Trace3. “We are excited to integrate the power of ExtraHop Reveal(x) enterprise visibility and machine learning into our world-class security practice, helping our customers identify and address threats before they affect the business.”


Product Availability

ExtraHop Reveal(x) is available now in North America via ExtraHop’s value-added resellers for an annual subscription.

About ExtraHop

ExtraHop is the first place IT turns for insights that transform and secure the digital enterprise. By applying real-time analytics and machine learning to all digital interactions on the network, ExtraHop delivers instant and accurate insights that help IT improve security, performance, and the digital experience. Just ask the hundreds of global ExtraHop customers, including Sony, Lockheed Martin, Microsoft, Adobe, and Google.


Upwards and onwards – Sintrex internship graduates looking back

One core aspect of the Sintrex culture is empowering employees.

The Sintrex Internship Programme not only upskills IT graduates but it also gives them insight into a professional IT environment, where they can learn and explore what it means to be an IT engineer.

After another successful intern cycle, we decided to explore what Sintrex staff (formerly interns) had learned.

Employees learned the importance of hard work, prioritisation, time management, teamwork, perseverance, persistence and how to face new challenges.

“While I have learned many things in the year I interned for Sintrex, the main lesson has been that by trial-and-error we learn and grow – anything can be done, if we are determined enough to get it right and learn from our mistakes.”

These lessons extended beyond mere work, as interns reported becoming more patient, understanding, organised and more balanced in work and life.

“As an introvert, I have developed social skills and become more sociable with everyone I work with, and the new people I meet.”

“Prior experience does not determine who or what you are.

“It is all about your ability to adapt to the new situation presented to you within the structure of the business, and sometimes the hardest lesson one needs to learn is not to be a slave to conformity, but to re-invent oneself for the tasks and opportunities that have been given to you.”

Getting a head start in the IT industry

The Sintrex internship programme was launched in 2016 with the goal of creating a talent pipeline of potential employees, either for Sintrex or other ICT companies in Africa.

The interns report that Sintrex’s programme “just felt right” to them:

“I applied for the internship because it was a great opportunity to learn and grow in an IT environment; this is a one-in-a-million opportunity, and I would not say no to a career-changing move.”

They said that the programme is well constructed and executed, and offers a professional working environment, as well as opportunities for career growth.

Some were informed about the opportunity from friends who worked at Sintrex and had experienced the advantages of the internship first-hand.

“I was happy; I decided to apply for the internship, as my friend suggested, because I’m doing the work that I always wanted to do, while learning every day and loving the work I do even more.”

Of the best experiences of the internship, many of the interns commented on the great company culture, saying that it is great to “work with such an awesome diverse group of people”, and to “enjoy a braai with colleagues that you can call friends.”

“You get to socialise on a casual level with everyone in the company, even the CEO… Not many companies offer that.”

Working at Sintrex

The social structure of the company allows for better teamwork, the interns reported.

“It is easier to understand one another on a professional level, if you have a personal understanding of how everyone in your team works.”

While work is fast-paced, it is also fun and offers an environment to learn and grow, with many senior staff happy to provide guidance.

“Every single environment you work in within Sintrex will always have the best personalities to learn from, and there is a type of family bond that you start to grow with the colleagues around you.”

“Sintrex is a very professional company where people always treat one another with respect and dignity.”

All new staff are looking forward to their careers at Sintrex, saying that they expect to grow, both in their careers and in their personal lives.

“I look forward to a career where I can continue to study, with the freedom to explore my options of what interests me most in IT.”

Interested in a Sintrex internship?

For those interested in a Sintrex internship, the graduates affirm that “if you are interested, you cannot go wrong – if this is your passion and interest, Sintrex is the perfect place to start your career and learn the ropes in a corporate and professional environment.”

The interns were impressed by how much Sintrex invests in them, saying, “They truly invest in one’s career.”

“Do not even second-guess your decision to apply for the internship at Sintrex, as it will give you more than you ever expected.”

“Becoming part of the Sintrex team is rewarding, with the social events and all-round atmosphere within the company.”


IT geeks hate operational monotony: why you should adjust your sails

“I can’t change the direction of the wind, but I can adjust my sails to always reach my destination.” ~ Jimmy Dean

25 years ago…
Change in IT happened so often that the only constant was upgrading technology: from XT to AT workstations, from thick Ethernet cabling to thin Ethernet, from Token Ring topologies to bus topologies, and from binding IPX to TCP/IP protocols on NICs. Techs were techs – having a variety of technical skills, from writing little DOS menus to re-cabling a building with your own RG-58 cabling tools, while explaining the difference between a floppy and a stiffy to a new user. The IT team was “The IT Team” – no specialists. Some just knew a little more than others. Everything seemed to be project-driven, and operational issues were small issues that had to be resolved along the way as part of the “project”.

As time progressed…
Some technologies became adopted standards implemented by most organisations, like star-topology reticulation and TCP/IP. Techs started to specialise, and businesses started looking for specific skills in the market to address certain technology needs. Desktop support engineers faced the irate users, while network engineers fiddled with routing protocols that could halt a company’s business operations with a small typo in a subnet mask.

The bigger the company infrastructures grew, the more controls and processes were implemented. And the one thing that Techs hate is exactly that: the monotony of “more of the same”. Techs want change – new toys, upgrades, new features and exciting bells and whistles. Businesses, on the other hand, want stability, productivity, uptime, less change, no outages, ease of use, etc.

Today…
Understandably, even when things go wrong, Techs would rather be working on new projects and new technology than trying to figure out what is now causing the storm in the cloud. With the evolution of systems and applications, Techs have been saved! Today, there are systems available that can pinpoint issues in any IT environment for quick resolution, but here’s the curveball… these systems, too, need to be maintained. Which brings us back to the modern expectation of specialisation (internal resources are very rarely specialised in external systems) and operational monotony (which, as we stressed, Techies through the ages simply “love”!).

The solution…
It is quite simple. As quoted above, you need to accept the reality and adjust your sails! A third, independent party like Sintrex specialises in a variety of tools and systems, and for us it is not monotonous, but exciting! Our services come with a variety of additional benefits. It is important to remember, despite systems that “pinpoint issues in any IT environment for quick resolution”: if you cannot see it, or have not loaded it for monitoring, then you cannot monitor it! And without monitoring there can be no quick resolution, only frustrated Techs and increased pressure from business.

Come chat to us at the MyBroadband Conference on 26 October. We would love to explain how we can assist in adjusting your sails, to set you on the course for success!


Sintrex deploys automated testing to provide end-to-end service solutions

Sintrex, an Infrastructure Management Company based in South Africa, believes in quality service delivery and is passionate about the innovative pursuit of excellence in providing end-to-end IT solutions and services.

To help ensure this quality, they have worked hard to deploy automated testing to the Sintelligent modules they use to deliver their services to clients.

How Sintrex automated testing works

Automated testing makes use of a testing framework consisting of Jenkins and Selenium that allows test cases to be run unassisted at any time of the day or night to ensure functionality is working as expected.

Sintrex’s automated software framework allows them to test changes made to their code base at regular intervals and allows them to do so with minimal assistance from testing team members.

By eliminating the need for a person to do the testing, the framework allows these tests to run nightly and helps identify any issues caused by code changes almost as soon as they happen.
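
As a hedged illustration of what one such unattended test case might look like – the URL, element locator and assertion below are hypothetical, not part of Sintrex’s actual suite, and the selenium package is assumed to be installed:

```python
# Hypothetical sketch of an unattended Selenium test case of the kind a
# Jenkins job might run nightly. URL and element locators are invented.
import unittest
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPageTest(unittest.TestCase):
    def setUp(self):
        # Headless Chrome so the test can run on a build server.
        options = webdriver.ChromeOptions()
        options.add_argument("--headless")
        self.driver = webdriver.Chrome(options=options)

    def test_login_page_loads(self):
        self.driver.get("https://app.example.local/login")   # hypothetical URL
        field = self.driver.find_element(By.ID, "username")  # hypothetical ID
        self.assertTrue(field.is_displayed())

    def tearDown(self):
        self.driver.quit()

if __name__ == "__main__":
    unittest.main()
```

A Jenkins job can then invoke a script like this on a nightly schedule and mark the build as failed whenever an assertion fails.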

The road ahead

As their test automation framework matures, Sintrex will be in a position to rely on their automation processes with enough confidence to release versions of the modules more frequently.

“Unlike solutions where the software sits in a central location and is accessed by multiple clients, our software is deployed at each of our clients and used to provide the services we offer to that particular client,” says Gregory Hooper, the Quality Assurance Manager.

This adds logistical challenges to their upgrade process, as each release must be deployed in an individual change control slot at each of their clients.

This means that, although there will be benefits in being able to deliver releases more frequently, Sintrex still needs to work to find the sweet spot for regular deployment intervals.

Through the deployment of automated testing, Sintrex is now able to offer IT management solutions that are even more accurate and relevant.

This gives you improved real-time infrastructure and service-level information that is constantly available, enabling easy identification of service-level lapses and minimising problem-resolution time.

To get insight into your business applications, IT infrastructure or network usage patterns – backed by Sintrex automated testing – please visit the Sintrex website.
