Insights


Don’t sit back and watch things break, drive changes for improvement!

17 February 2020


When monitoring systems statistically indicate that things are bad, it implies that certain actions must be taken to rectify them. “But you’re telling me what my users are already telling me” is NOT the desired response.

Yes, monitoring systems will measure your users’ experience and provide you with factual proof – which you may well require – that the user experience is terrible.

More importantly, however, they will also tell you why the experience is bad. This is why you need to understand the information – so that you can drive the actions that rectify it.

There is no silver bullet, and nothing beats elbow grease to get systems running optimally.

So, for monitoring to add true value, you should:

  • understand the statistics and measures that you’re looking at and
  • be prepared for a service improvement project or focus group to action a few things to resolve.

The first bullet point above should be fairly easy – your monitoring provider can teach you all about the metrics and measures.

On the second point, however, you should be willing to do a little research and interact with more departments and/or service providers.

Example: you monitor network performance, application response and transaction speeds to the back-end database, and find that users experience slow responses because of the database. You would then need to start investigating all aspects of the database.

This may not be an area that you’re familiar with or responsible for, but it’s an area that you need to stick your nose into because, in this scenario, it’s the area that needs tweaking to improve user experience of an application.

A monitoring tool does not fix things for you, despite development and progress in AI and machine learning – we’re not there yet.

Keeping to the above scenario, you could start by finding out who maintains the database and who looks after the hardware or VM that hosts the DB.

Then ask those people about maintenance, performance, size, speed, optimization options, etc… Ask “silly” questions and Google a lot. Each of these interactions should spawn a few actions that could improve the performance of the database.

After each action, re-check the measured user experience until you start noticing performance improvements.
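As a concrete illustration of this measure-fix-remeasure loop, here is a minimal sketch in Python. The two probe functions are hypothetical stand-ins – replace them with your real end-to-end transaction and a direct database query:

    import statistics
    import time

    def probe_end_to_end() -> float:
        """Time one full user transaction (client -> app -> DB)."""
        start = time.perf_counter()
        time.sleep(0.30)  # stand-in for the real transaction
        return time.perf_counter() - start

    def probe_database() -> float:
        """Time the same query issued directly against the database."""
        start = time.perf_counter()
        time.sleep(0.25)  # stand-in for the real query
        return time.perf_counter() - start

    def baseline(probe, runs=10):
        # The median of several runs smooths out one-off spikes.
        return statistics.median(probe() for _ in range(runs))

    e2e, db = baseline(probe_end_to_end), baseline(probe_database)
    print(f"end-to-end: {e2e:.3f}s, database: {db:.3f}s")
    if db > 0.8 * e2e:
        print("most time is spent in the database tier - investigate there")

Re-run the same probes after every remediation action; if the numbers do not move, the action did not help.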

In this way, you are not a passive consumer of data which ultimately adds no value – you are an active force of change that helps improve performance! Remember to document your learning in a knowledge base… and keep measuring!

“We all know how important it is to ensure that business-critical applications are constantly up and running, but this is dependent on the effectiveness of the underlying infrastructure. It has never been more important for companies to understand how critical business services, the IT infrastructure, and applications work together, because a failure in one area can have a negative domino effect on others,” says Sintrex’s CTO Emile Biagio.

He adds that having an infrastructure that works does not necessarily mean that it’s healthy or available; monitoring is therefore a vital aspect in obtaining the insight needed to ensure optimal functionality. “The failure of one switch might not seem like a big deal, but can become mission-critical in one area of the business. The underlying infrastructure might be working, but if glitches occur, users will encounter challenges and complain about their IT ‘not working properly’.”

Maintaining a stable and functional infrastructure rests on an end-to-end monitoring approach. “An overview of your entire estate,” Biagio points out.

All of the elements that make up the business system need to be looked at from the perspective of the infrastructure, the applications, and the end user experience.

Only with this holistic approach can companies gain insight into their availability, health and ability to trade.

“The business is connected through a network – whether a local area network (LAN), a wide area network (WAN), or both. If there is a problem with a connection at any point on any of these networks, the users often associate the challenges they encounter with the applications they are trying to access rather than the network. Similarly, many workers these days are mobile, and can encounter problems accessing the organisation from external locations. Monitoring the IT infrastructure must therefore start with evaluating the connectivity enabled by the network.”

Connectivity is a key foundation upon which any infrastructure depends, but workload and application availability fuel any business’s productivity on a daily basis; these areas must therefore be monitored to ensure business continuity.

“The right monitoring approach can provide a comprehensive overview of the health of the infrastructure. This can be achieved with different levels of insight, so business can have an overview without having to know the specifics of the technical aspects, while IT gains deep understanding and fast, effective problem resolution,” Biagio says.

“Proactive awareness of what is going on across the infrastructure allows for improved user experience as well as pre-emptive fault resolution. Not only is understanding the health of the infrastructure vital to the smooth operation of any business, it also reduces costs in the long run, mitigates risks and enables effective planning.”

Wispeco and Sintrex hold hands to improve productivity


Wispeco Aluminium is the largest aluminium extrusion company in South Africa.

The company recently suffered issues after implementing SYSPRO ERP for its branches across the country. The company’s IT team tried to work out the issues themselves but, after various attempts failed, Wispeco contacted Sintrex for assistance. Wispeco had assumed that the issues pertained to the company’s network, despite not having accurate statistics to confirm this.

Sintrex executed a complete audit and investigation of Wispeco’s network, servers, and server environment. “The question is always ‘who audits the auditors’ and I needed somebody that could give me an independent and thorough assessment of my network,” said Pieter Heyns, Head of IT at Wispeco.

According to Heyns, Sintrex was able to pinpoint exactly what Wispeco’s problems were. “They gave us very good feedback and an action plan we could use,” said Heyns.

Sintrex was able to determine that Wispeco’s issues were not with their network, which was found to be stable after extensive testing.  Instead, the latency issues were situated within the server.  Sintrex also managed to uncover that Wispeco was not being allocated the bandwidth it was paying for at one of its sites. 

“From the account management side, through to the technical teams, Sintrex is a very professional organisation with very capable people,” said Heyns. “You need facts to make decisions, and Sintrex was able to provide us with these facts.” Heyns said that he would definitely recommend Sintrex. “The way the Cape Town and Joburg offices work together, their strong focus on project management, the fact that they give you regular updates, and the way they push for results were all very positive to me.”


When businesses face lag and latency issues, many automatically assume that the issue lies with their network. In truth, there are various possibilities in such scenarios, which makes it important to use a knowledgeable third party to determine the root of your issues. Sintrex is a leading South African infrastructure management company that offers end-to-end IT solutions and services. They have proven that they are capable of assessing and diagnosing issues in a business’s IT systems, as this case study shows. Sintrex is committed to offering the best IT solutions, and strives to offer superior service and results to its customers.

#data #information #4thindustrialrevolution

By Emile Biagio, CTO, Sintrex

I recently watched an investigative series set in the 1970s, in which a judge dismissed evidence that linked a suspect to a murder; he claimed that he did not believe in all this “scientific mumbo jumbo”. My my, how far we have progressed. Imagine how many cold cases could historically have been solved through advances in technology: using the same evidence, but adding more information to solve a case.

Fast forward to the present, where seven of the world’s ten largest companies are tech companies and oil is no longer considered our most valuable resource. Yup, not oil, but data!

Data? Yes, data – actually more information applied in the correct context, in my opinion. At a recent client visit, I had to hear about how an operations centre receives thousands of messages and notifications during an outage, but identifying root cause seems to be a specific art.

So, as is the norm today, this client has monitoring systems plugged into just about every critical application running on their infrastructure. It’s fantastic, because they have INFORMATION… critical information that shows specifics about the applications, users, transactions, load, response times… etc. This information empowers them to tweak, tune and adapt the systems to drive business productivity.

The problem is, when there is a glitch in the matrix, all the monitoring systems spew out thousands of messages to highlight anomalies. This is what we build: more and more systems that collect information. I pulled a statistic from another client (for interest): 489 million messages in one month… that’s a lot. It’s about twenty hundred five and seventy messages a day (sorry Mr. Zuma, still funny).

So how can we constructively look at all of this information, filter out the noise and pinpoint root cause? Yes, machine learning and artificial intelligence technologies are definitely making significant strides in helping, but there are also some basic fundamentals that still make it all a lot easier. Maybe not from the 1970s, but at least from the 1990s:

  • A system that monitors your underlying common denominator – your network – and automatically identifies root-cause outages.
  • The ability to classify anomaly impact, e.g. Minor, Major, Critical.
  • A basic filter that allows you to swiftly view the information that you need, or filter out the noise that you need to ignore.
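As a rough sketch of the second and third points – assuming nothing about any particular product, since the message fields and severity rankings below are invented for illustration:

    from collections import Counter

    SEVERITY_RANK = {"Minor": 1, "Major": 2, "Critical": 3}

    messages = [
        {"source": "core-switch-01", "severity": "Critical", "text": "link down"},
        {"source": "branch-ap-17", "severity": "Minor", "text": "high latency"},
        {"source": "core-switch-01", "severity": "Critical", "text": "link down"},
    ]

    def filter_noise(msgs, min_severity="Major"):
        """Keep only messages at or above the chosen impact level."""
        floor = SEVERITY_RANK[min_severity]
        return [m for m in msgs if SEVERITY_RANK[m["severity"]] >= floor]

    # The device raising the same high-impact event most often is a
    # good first candidate for root cause.
    important = filter_noise(messages)
    print(Counter(m["source"] for m in important).most_common(1))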

If you apply a filter to a badly taken photo, it will look ok, but apply the same filter to a great photo and it’s suddenly brilliant! Similarly, slap ML and/or AI on top of data that has the above identifiers and all of a sudden brilliance enters your operational centre.

“Information is power, but only if people are able to access, understand and apply it.” ~ Unknown

Fools and their tools

By Emile Biagio, CTO, Sintrex

Buy local, South Africans! You are creating sustainable careers for our youth!

If I had one buck for every time we get lured into a “software features” discussion with a potential client, I’d own an overstocked game farm!

Or what about the infamous feature shoot-out or comparison spreadsheet that shows the gaps between products? How many propeller heads have motivated 50% more spend for 20% more features? Hopefully, it was justified.

If you have ripped and replaced monitoring software in the past 3 years, or if you’ve invested in yet another tool to fill another gap that you thought was covered in the tools that you already have, then you’re doing something wrong.

Read carefully, you’re doing something wrong! Don’t go to market and find other tools… because it might just be the fool behind the tool – and not the tool.

Consider a process audit first. Look at what you should be doing, irrespective of the tool’s ability to facilitate the process.

If your process audit compliance is low because of a tool, then look for an alternative, but use your requirement framework to find the right fit.

If you make up a comparative list, I’d bet that of all the tasks and processes that you should be doing, fewer than 50% of the gaps can be blamed on a tool that does not support them.

Here are a few considerations if you want to buy a tool – I know it’s probably only a fraction of what’s required, but it’s a good place to start:

  • Who’s going to install the tool?
  • Who updates the managed devices loaded for monitoring?
  • How often is it updated?
  • How must it be structured? (Location, SLA, Business Unit or technology based?)
  • Who sets the standards for devices to be monitoring compliant?
  • Who makes sure that the hardware and software resources are sufficient for the tool?
  • Who looks after the hardware?
  • Is there a database used for storage? Who is maintaining the DB?
  • Are the backups in place? Do you need a DR solution?
  • Who provides access to the system?
  • Who sets up the dashboards?
  • If there are integration requirements, who owns that and maintains it?
  • Who must be trained to use the tool? Who does the training?
  • Who disseminates information? If it’s ‘automated’, who sets it up?
  • Who must get what information?
  • What actions must be taken regarding specific information?
  • Who must watch screens and what do they do based on what they see?
  • Who must receive automated escalations? What must they do about it?

And if you don’t want to buy another tool, consider outsourcing it all and ask questions like these:

  • Will you (Service Provider) look after all ‘Tool’ required hardware, software, licenses, capacity, backups, administration, DR and…
  • Can I have a geographical view of all my outages?
  • Can I see all non-performing assets and stressed assets?
  • Can I evaluate capacity issues for all devices?
  • Can all my assets be tracked geographically?
  • Can I have all my assets collated in one area for data mining?
  • Can I mark all my SLAs monthly?
  • Can I see and measure user experience and application performance?
  • Can I check my IT provider compliance to standards and best practices?
  • Can I provide different business units a view or report for their portion of the infrastructure?
  • Can I have an on-site Operations Centre or the option to reduce costs and host it off site?

Make sense? ’Cause now you’re moving away from looking at the tool. You’re making it someone else’s problem and ensuring that you get the required output to run your business and improve service delivery!

Beauty vs. the Beast!

This thought pattern is bananas!


Most of us like bananas, right? We’re privileged to have access to bananas in areas where they do not grow naturally. We even have the luxury of choosing how many we want to buy and we can hand-pick them from hundreds on display!

But why do you buy bananas? Do you just buy for the sake of having a fruit snack? Are you making a fruit salad or perhaps banana bread? Do you buy them because they’re on sale and look REALLY good? And because they’re on sale, “let’s buy more and decide what to do with them after the purchase!” (Sounds like my wife…)

Have you noticed that if you purposely buy bananas for a specific reason, then you become very selective in your purchasing decision? Generally we would shop for ripe and maybe organic bananas to make really nice banana bread. Anything other than ripe really will not do.

Making the ideal banana bread requires a good recipe, some additional ingredients and some know-how. We could opt to purchase a pre-made banana bread, but we know that some people REALLY know how to make an excellent banana bread, so much so that you might ask them to make it for you!

So what?

So, what if I told you that the banana is your product and the banana bread is your required output? This would mean the additional ingredients, recipe and baker make up the services provided to get to the required output.

I use this metaphor to illustrate to many organisations that when they start looking at service companies to provide services, they should find someone that can provide them with the required output!

Most organisations – especially in IT – will use tech experts to review service companies and (as I’ve heard before) ask to “lift their skirts” and reveal components that make up the service offering…. i.e. “lift your skirts and show us your bananas, baker!” Mmmm, this metaphor just took a turn down the wrong path…

Let’s refocus! Don’t fall into the trap of evaluating products (bananas) when you know what you want as a service! Contract for the required output and let the service provider control the rest!


Application Monitoring – still haven’t found what you’re looking for?

In the IT monitoring space, it has become a requirement to have eyes on everything in your infrastructure, and everyone has become used to the single pane of glass, API integration with drill-through capability and full-stack service monitoring.

As a result, many specialist companies are punting complete visibility of your entire infrastructure and positioning their tools as the panacea to keeping an eye on it all, the entire time.

Looking at the features and capabilities of the more prevalent vendors out there, it appears to be realistic enough, but can one specialist tool really manage all of this in one go?

Is it possible? Yes!

Does it ever work? Hardly…!

Here’s why…

Suppose you have the Rolls-Royce of application monitoring tools. As soon as you start investigating last-hop network latency on a per-transaction basis to troubleshoot your customer portal’s performance issues – or something just as intricate, but relevant to your IT service – in most cases you will find that a mundanely basic network error is actually affecting normal service delivery.

Most of the marquee application monitoring tools that you come across can see into the most critical IT services at any level of detail.

Embarrassingly, upon implementation these tools most often end up pointing out bad housekeeping, like misconfigured DHCP or network flows being directed to discontinued IP addresses.

Despite the grand visions that we have for our IT environments, the ground level is not as stable as we expect or want it to be and will always be something that requires our attention.

One way of looking at it is the TCP/IP model of networking communications. Application monitoring tools are used to look at, troubleshoot and alert on the upper layer, as the name suggests, where transaction details can be decrypted for DPI – Deep Packet Inspection.

Below this, the Transport, Internet and Physical layers are the supporting communication layers; they essentially constitute the physical and virtualized network equipment, VLANs, Quality of Service bands and their configurations – everything that the business applications need to serve the end users with information.

If this TCP/IP model is viewed as a tower of building blocks, which it resembles in many ways, it stands to reason that the foundational layers need to be in place and under control before the upper layers can be used to any effect.
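A minimal bottom-up check along those lines, sketched in Python with a hypothetical host and port – confirm each supporting layer before reaching for the application-layer tooling:

    import socket

    HOST, PORT = "portal.example.com", 443  # hypothetical service

    # Name resolution first: nothing above this works without it.
    addr = socket.gethostbyname(HOST)
    print(f"DNS ok: {HOST} -> {addr}")

    # Transport/Internet layers: does a TCP connection succeed quickly?
    with socket.create_connection((HOST, PORT), timeout=3):
        print(f"TCP ok: {HOST}:{PORT} reachable")

    # Only once these pass is an application monitoring tool (DPI,
    # transaction tracing) the right place to keep looking.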

These are areas and functions that need to be maintained.

Don’t take my word for it though, refer to any operational lifecycle or governance framework. Somewhere between the planning, design and operation of any service in IT, maintenance is required.
ITIL labels it as “Transition”, COBIT says “Review Effectiveness” and the Sintrex in-house methodology chose to call it “Verify”, but it still speaks to evaluating existing structures for effectiveness and performing maintenance where necessary.

But “If it isn’t broken, don’t fix it”, so unless something goes wrong and gets rectified, how would one maintain the lower layers of this tower?

There should be continuous emphasis on the lowest level of the model, but your focus can only move to the upper layers once the layers currently in focus mature into established processes of maintenance and upkeep.

This should ring true for anyone involved in networks, as the first port of call when assigning blame is, invariably, the network. More trust in the network and higher visibility into the lower layers translate into less time spent hunting basic errors.

And when an end user claims the ERP system is not working, IT support should first and foremost confirm that the physical network servicing the system is up and running.

If you can say with confidence that the basics are in place and the network is doing what it should, it enables you to build up from this foundation to view all the intricacies that depend on the network.

This is the level of confidence you should have in your network before you put your trust in application monitoring.

SD-WAN’s impact on network monitoring

SD-WAN providers claim that application performance can improve by up to forty times when migrated to SD-WAN technologies… That’s a phenomenal statistic! But how true is it? How did this number roll up to the marketing department to lure you into clicking the “Subscribe to SD-WAN” button?

Strategy guru Peter Drucker once said: “If you cannot measure it, you cannot improve it.” So these claims imply that some form of measurement backs up the statistic. This also means that the initial concern is having the visibility to actually measure performance, before being able to improve on it.

Recently at the Interop ITX in Las Vegas, one of the breakfast briefings was hosted by the IDC. The topic was “Intelligent Automation of Networks” and, more specifically, the rise of “intent based networking.”

IDC claim that network visibility is critical for all companies looking to digitally transform or improve their cloud architecture deployment. Those facing pressure to support a massively complex infrastructure should start by taking a good, hard look at their network monitoring capabilities.

It’s not just about monitoring a massively complex infrastructure to ensure a better user experience, but also to baseline the current user experience to ensure that user experience actually does improve. Migration for the sake of trying to resolve user perceived problems may not yield the desired user satisfaction, increased productivity or operational saving.

Many years ago, Sintrex was at the forefront of monitoring client experience while enterprises were migrating from private WANs to service provider MPLS networks. It was essential to baseline existing service levels so that new service levels could be compared. It’s not much different now. To retain control, organisations need to retain visibility.
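In code terms the baselining idea is trivial, which is exactly why there is no excuse to skip it. A hedged sketch, with placeholder samples standing in for the response times your monitoring platform would record:

    import statistics

    before_ms = [310, 295, 330, 305, 298, 340, 312]  # pre-migration samples
    after_ms = [180, 175, 190, 200, 185, 178, 182]   # post-migration samples

    b_med = statistics.median(before_ms)
    a_med = statistics.median(after_ms)
    print(f"median {b_med} ms -> {a_med} ms "
          f"({(b_med - a_med) / b_med:.0%} improvement)")
    # Without the 'before' baseline there is nothing to measure the
    # vendor's "up to forty times" claim against.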

A couple of other predictions made by the IDC include:

  • In the near term (6-to-12 months), monitoring for SD-WAN links and specific SaaS services will see the greatest levels of investment.
  • Over the 12 to 24 month period, enterprises will invest in and integrate new network performance monitoring capabilities with existing application performance management platforms.

Sintrex executive Ludwig Myburgh asserts that “from a Sintrex perspective, SD networks do not have a major impact on our monitoring paradigm. Devices will still have IP addresses with management capabilities, be interconnected via subnets and perform similar networking services.”

“The configuration and changes applied dynamically to these devices are where there is a major change to the traditional WAN paradigm. In monitoring, storing, checking for compliance and tracking changes, we see ourselves playing a major role. Vendors are exposing the information via APIs, particularly RESTful APIs.”

“This is where Sintrex will interconnect and collate information, store in the CMDB and bring into a consolidated warehouse to provide holistic IT intelligence.”

“From a Fault, Performance and Flow perspective there are no major changes as most of the information is still available via SNMP and NetFlow for the network based platforms and WMI for the Windows environment.”
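As an illustration of the integration pattern Myburgh describes – and only an illustration, since the endpoint, token and JSON fields below are hypothetical – polling a vendor’s RESTful API and upserting the result into a small table standing in for the CMDB might look like this in Python:

    import json
    import sqlite3
    import urllib.request

    API_URL = "https://sdwan.example.net/api/v1/devices"  # hypothetical endpoint
    TOKEN = "changeme"                                    # hypothetical token

    req = urllib.request.Request(API_URL,
                                 headers={"Authorization": f"Bearer {TOKEN}"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        devices = json.load(resp)

    db = sqlite3.connect("cmdb.sqlite")
    db.execute("CREATE TABLE IF NOT EXISTS device "
               "(ip TEXT PRIMARY KEY, name TEXT, site TEXT)")
    for d in devices:
        # Upsert keeps the record current as dynamic SD-WAN changes roll in.
        db.execute("INSERT OR REPLACE INTO device VALUES (?, ?, ?)",
                   (d["ip"], d["name"], d["site"]))
    db.commit()
    print(f"stored {len(devices)} devices")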

This article was published in partnership with Sintrex.


Partner update – ExtraHop introduces Reveal(x)

ExtraHop Introduces Reveal(x) to Expose Attacks on Critical Assets and Automate Investigations

New Security Analytics Product Discovers and Contextualizes all Network Transactions to Surface High Risk Anomalies and Cut Investigation Time from Days to Minutes

SEATTLE – January 30, 2018 – ExtraHop, the leader in analytics for security and performance management, today announced the general availability of ExtraHop Reveal(x). This new security analytics product builds on enterprise-proven anomaly detection powered by wire data, giving security teams much-needed insight into what’s happening within the enterprise while automating the detection and investigation of threats. By analyzing all network interactions for abnormal behavior and identifying critical assets in the environment, Reveal(x) focuses analysts’ attention on the most important risks and streamlines response to limit exposure.

An Industry in Transition…

Security teams face a convergence of factors that complicate operations and decrease visibility. Hybrid and multi-cloud architectures increase agility but reduce operational control. Encryption is vital but disguises both benign and malicious activities. At the same time, businesses are shifting the emphasis from physical control points like endpoints and firewalls to logical perimeters such as trusted domains, privileged users, IoT, cloud, microservices, and containers. A new source of insight is required for modern architectures, one that provides empirical evidence to help analysts triage and investigate threats with confidence and timeliness.

“Attack surfaces are expanding and the sophistication of attackers is increasing. There simply aren’t enough talented security professionals to keep up,” said Jesse Rothstein, CTO and co-founder, ExtraHop. “Reveal(x) provides security teams with increased scrutiny of critical assets, detection of suspicious and anomalous behaviors, and workflows for both automated and streamlined investigation. We enable practitioners to do more with less by getting smarter about the data they already have.”

A Better Approach, A More Efficient Workflow

Reveal(x) addresses the gaps in security programs by harnessing wire data, which encompasses all information contained in application transactions. It auto-discovers, classifies, and prioritizes all devices, clients, and applications on the network and employs machine learning to deliver high-fidelity insights immediately. Anomalies are directly correlated with the attack chain and highlight hard-to-detect activities, including:

  • Internal reconnaissance — scans for open ports and active hosts, brute force attacks, attempted logins, and unusual access patterns.
  • Lateral movement — relocation from an original entry point, privilege escalation, and ransomware spread.
  • Command and control traffic — communications between a compromised host within the network and the targeted asset or an external host.
  • Exfiltration — large file transfers, unusual read/write patterns, and unusual application and user activity from an asset either directly or via a stopover host.

In a single unified system, Reveal(x) guides analysts to review relationships between these malicious activities and related evidence that informs disposition: the exhibited behavior, baselined measurements, transaction details, and assets involved. Live Activity Maps show communications in real time and can also replay transactions to illuminate the incident’s timing and scope. Detailed forensic evidence is just a click away, enabling immediate root cause determination using individual packets.

What Customers Are Saying

“When you work in a business dealing with the nation’s leading insurance companies, there is a lot of pressure to get it right. We rely on ExtraHop to provide us with the visibility needed to investigate performance and security issues,” said Chris Wenger, Senior Manager of Network & Telecommunication Systems at Mitchell International. “With ExtraHop in our IT environment, we can more easily monitor all of the communications coming into our network, including use of insecure protocols. These insights enable my team to better secure our environment. ExtraHop has been that extra layer of security for us.”

What Analysts Are Saying

“In security, your intelligence is only as good as the data source from which it’s derived,” said Eric Ogren, Senior Analyst at 451 Research. “The network is an ideal place to identify active computing devices and call out threats as they attempt to probe and communicate. ExtraHop Reveal(x) balances real-time critical asset insights with machine learning-based network traffic analytics to create visibility that will help security teams stay one step ahead of security incidents for those assets that matter most.”

What Partners Are Saying

“There are no silver bullets when it comes to identifying and managing risk within a business information security program. It’s a multidimensional problem that requires reliable sources of insight and best-of-breed technology,” said Tim O’Brien, Director of Security Operations at Trace3. “We are excited to integrate the power of ExtraHop Reveal(x) enterprise visibility and machine learning into our world-class security practice, helping our customers identify and address threats before they affect the business.”


Product Availability

ExtraHop Reveal(x) is available now in North America via ExtraHop’s value-added resellers for an annual subscription.

About ExtraHop

ExtraHop is the first place IT turns for insights that transform and secure the digital enterprise. By applying real-time analytics and machine learning to all digital interactions on the network, ExtraHop delivers instant and accurate insights that help IT improve security, performance, and the digital experience. Just ask the hundreds of global ExtraHop customers, including Sony, Lockheed Martin, Microsoft, Adobe, and Google.


Upwards and onwards – Sintrex internship graduates looking back

One core aspect of the Sintrex culture is empowering employees.

The Sintrex Internship Programme not only upskills IT graduates but it also gives them insight into a professional IT environment, where they can learn and explore what it means to be an IT engineer.

After another successful intern cycle, we decided to explore what Sintrex staff (formerly interns) had learned.

Employees learned the importance of hard work, prioritisation, time management, teamwork, perseverance, persistence and how to face new challenges.

“While I have learned many things in the year I interned for Sintrex, the main lesson has been that by trial-and-error we learn and grow – anything can be done, if we are determined enough to get it right and learn from our mistakes.”

These lessons extended beyond mere work, as interns reported becoming more patient, understanding, organised and more balanced in work and life.

“As an introvert, I have developed social skills and become more social with everyone I’m working with, and the new people I meet.”

“Prior experience does not determine who or what you are.”

“It is all about your ability to adapt to the new situation that has been presented to you within the structure of the business, and sometimes the hardest lesson that one needs to learn is not to be a slave to conformity but to re-invent oneself for the tasks and opportunities that have been given to you.”

Getting a head start in the IT industry

The Sintrex internship programme was launched in 2016 with the goal of creating a talent pipeline of potential employees, either for Sintrex or other ICT companies in Africa.

The interns report that Sintrex’s programme “just felt right” to them:

“I applied for the internship because it was a great opportunity to learn and grow in an IT environment; this is a one-in-a-million opportunity, and I would not say no to a career-changing move.”

They said that the programme is well-constructed and executed, and offered a professional working environment, as well as opportunity for career growth.

Some were informed about the opportunity from friends who worked at Sintrex and had experienced the advantages of the internship first-hand.

“I was happy; I decided to apply for the internship, as my friend suggested, because I’m doing the work that I always wanted to do, while learning every day and loving the work I do even more.”

Of the best experiences of the internship, many of the interns commented on the great company culture, saying that it is great to “work with such an awesome diverse group of people”, and to “enjoy a braai with colleagues that you can call friends.”

“You get to socialise on a casual level with everyone in the company, even the CEO… Not many companies offer that.”

Working at Sintrex

The social structure of the company allows for better teamwork, the interns reported.

“It is easier to understand one another on a professional level, if you have a personal understanding of how everyone in your team works.”

While work is fast-paced, it is also fun and offers an environment to learn and grow, with many senior staff happy to provide guidance.

“Every single environment you work in within Sintrex will always have the best personalities to learn from, and there is a type of family bond that you start to grow with the colleagues around you.”

“Sintrex is a very professional company, and everyone always treats one another with respect and dignity.”

All new staff are looking forward to their careers at Sintrex, saying that they expect to grow, both in their careers and in their personal lives.

“I look forward to a career where I can continue to study, with the freedom to explore my options of what interests me most in IT.”

Interested in a Sintrex internship?

For those interested in a Sintrex internship, the graduates affirm that “if you are interested, you cannot go wrong – if this is your passion and interest, Sintrex is the perfect place to start your career and learn the ropes in a corporate and professional environment.”

The interns were impressed by how much Sintrex dedicates to them, saying, “They truly invest in one’s career.”

“Do not even second-guess your decision to apply for the internship at Sintrex, as it will give you more than you ever expected.”

“Becoming part of the Sintrex team is rewarding, with the social events and all-round atmosphere within the company.”
