How Gigamon is making its mark in deep observability

Gigamon CEO Shane Buckley talks up the company’s ability to inspect encrypted network traffic for malicious activity, how it stands out with its deep observability capabilities and the tailwinds that are fuelling its growth

Encrypted communications, while intended for security and privacy, are fast becoming a hiding place for cyber criminals, with as much as 95% of malware lurking behind the secure sockets layer and transport layer security (SSL/TLS) encryption used by secure websites, according to WatchGuard researchers.

Yet, a recent study by Gigamon found that over 70% of global IT and security leaders don’t inspect encrypted data flowing across their networks, leaving malware threats potentially undetected by their arsenal of security and monitoring tools.

Gigamon, known for its network visibility capabilities, raised the ante recently when it found a novel way to inspect encrypted network traffic by using a kernel element in the Linux operating system while reducing the volume of traffic in network flows by up to 96%. It calls this deep observability.

During a recent visit to Singapore, Gigamon CEO Shane Buckley spoke to Computer Weekly about the company’s deep observability capabilities, how it stands out in the observability space and the tailwinds that are driving its growth globally, including in the Asia-Pacific region.

Tell us more about Gigamon and how it’s different from others in the observability space.

Buckley: We’ve made a big transformation in our business over the past four to five years, and it’s not hyperbole to say that we protect the largest and most complex networks in the world, including those of governments. We’ve been known as the visibility company: for nearly 20 years, our role was to extract network packets and NetFlow data out of physical datacentres, and to reduce traffic using data reduction and deduplication technologies.

But in the past four years, we made a massive shift towards hybrid cloud. As more workloads move from datacentres to public cloud, and now come back to on-premise and colocation environments, you can’t get that network visibility, because you can’t put a physical tap inside Amazon Web Services (AWS), Microsoft Azure, VMware or a Kubernetes container. That means you lose all the context of what’s happening at the network level.

Up to now, the only way to capture what was happening in applications in virtual or hybrid cloud workloads was to use log-based information and send it to a security information and event management [SIEM] system like Splunk for analysis. But logs are known to be insecure, because a log is just a file that an application writes and sends to a server. When a nefarious actor attacks your network, they can alter or overwrite old logs with updated dates and timestamps.
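To make that weakness concrete, here is a minimal sketch of how trivially a process with enough privilege can rewrite a log and put its timestamps back; the log path and the telltale hostname are hypothetical:

```python
import os

LOG = "/var/log/app/audit.log"  # hypothetical application log path

st = os.stat(LOG)  # remember the file's original timestamps

# Remove the entries that would reveal the intrusion...
with open(LOG, "r+") as f:
    kept = [line for line in f if "attacker-host" not in line]
    f.seek(0)
    f.writelines(kept)
    f.truncate()

# ...then restore the old access and modification times, hiding the edit
os.utime(LOG, (st.st_atime, st.st_mtime))
```

Network telemetry has no equivalent rewrite: once packets have crossed the wire and been recorded elsewhere, an attacker cannot retroactively edit them.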

We thought that since we already protect all these large networks, if we could use the same techniques to extract traffic from workloads and optimise it in the same way as we do inside datacentres, it could be a huge benefit.

Now, when you send a full stream of network traffic from AWS to on-premise environments, it’s very expensive and takes tons of resources. We figured out that we could reduce the volume of traffic in these network flows by up to 96% by extracting metadata directly from network traffic.
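As a rough illustration of the underlying idea, reducing raw packets to NetFlow-style flow records, here is a sketch using the open source scapy library on an ordinary packet capture. The capture file name and per-record size are assumptions, and the 96% figure is Gigamon’s claim for its own pipeline, not something this toy example guarantees:

```python
from collections import defaultdict
from scapy.all import rdpcap, IP, TCP, UDP

packets = rdpcap("sample.pcap")  # hypothetical capture file
flows = defaultdict(lambda: {"packets": 0, "bytes": 0})

raw_bytes = 0
for pkt in packets:
    raw_bytes += len(pkt)
    if IP not in pkt:
        continue
    if TCP in pkt:
        l4 = pkt[TCP]
    elif UDP in pkt:
        l4 = pkt[UDP]
    else:
        continue
    # NetFlow-style 5-tuple: who talked to whom, on which ports
    key = (pkt[IP].src, pkt[IP].dst, l4.sport, l4.dport, pkt[IP].proto)
    flows[key]["packets"] += 1
    flows[key]["bytes"] += len(pkt)

# Each flow record is a few dozen bytes instead of full payloads
META_RECORD_SIZE = 50  # rough per-record estimate, an assumption
meta_bytes = len(flows) * META_RECORD_SIZE
reduction = 100 * (1 - meta_bytes / max(raw_bytes, 1))
print(f"raw: {raw_bytes} bytes, metadata: ~{meta_bytes} bytes "
      f"({reduction:.1f}% reduction)")
```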

That whole application stack now runs inside our cloud suite of products, enabling us to do deep observability, which is network visibility plus metadata extraction, SSL decryption, and our precryption technology that can uncover threat actor lateral activity concealed within encrypted communications.

At the same time, you have the flexibility to use a tool like Dynatrace that sits in the public cloud. We can send traffic from the datacentre to the cloud in a very efficient way so your NetOps, APM [application performance management], security or observability tools will continue to work in exactly the same way. CISOs [chief information security officers] can ensure the blast radius is not increasing dramatically, because they get the same level of telemetry as what they’d get in the physical datacentre.

How would your deep observability capabilities complement a customer’s existing investments in a log-based platform?

Buckley: Before cloud was even invented, we were already providing critical telemetry to SIEMs and other log-based infrastructure. You need both log-based information and network-based telemetry. You can’t just rely on logs, because if you look at the hacks that happened with MGM and Caesars Palace recently, there was complete log-based visibility.

The reason it didn’t work was because the hackers had privileged access and were able to sidestep the log infrastructure while they created command and control points to exfiltrate data. Network telemetry could have revealed traffic flows between servers that don’t normally talk to each other. For example, if you’ve got a fax server that’s requesting gigabytes of data from an application server, you know that’s not normal. And logs don’t capture any of that. That’s captured on the network.
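Buckley’s fax-server example boils down to baselining which hosts normally talk to each other, and how much. A minimal sketch of that kind of flow check follows; the server names, byte counts and threshold are invented for illustration:

```python
# Baseline of (client, server) pairs seen in normal operation, with
# typical bytes transferred per hour. All values are invented.
baseline = {
    ("app-01", "db-01"): 2_000_000_000,
    ("web-01", "app-01"): 500_000_000,
}

def check_flows(observed: dict) -> list:
    """Flag pairs that never normally talk, or sudden volume spikes."""
    alerts = []
    for pair, nbytes in observed.items():
        if pair not in baseline:
            alerts.append(f"NEW PATH {pair[0]} -> {pair[1]}: {nbytes} bytes")
        elif nbytes > 10 * baseline[pair]:
            alerts.append(f"VOLUME SPIKE {pair[0]} -> {pair[1]}: {nbytes} bytes")
    return alerts

# A fax server pulling gigabytes from an application server, as in
# Buckley's example, trips the "new path" rule immediately.
observed = {
    ("fax-01", "app-01"): 4_000_000_000,
    ("web-01", "app-01"): 450_000_000,
}
for alert in check_flows(observed):
    print(alert)
```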

For years, CISOs and even CIOs have used network telemetry to validate logs. It’s no different in the cloud. The problem with cloud is that because you have a shared security model across multiple domains and users, the blast radius is massive. The opportunity for nefarious actors to insert themselves and take control of your network is very high. And so, if you don’t know what’s happening at the network level, you’re never going to catch them.

Are there network or compute overheads that organisations need to be aware of as they use Gigamon in their hybrid infrastructure?

Buckley: That’s a great question. The overheads are practically negligible. If you use our native Universal Cloud Tap [UCT], the overhead is in the low single digits, at 2-3%. UCT operates across all cloud environments, including private cloud and containers, and gets a layer of information that we normalise and filter in the same way we’ve done for 20 years. Then, we process and optimise the information using our GigaSmart applications.

You mentioned looking into encrypted traffic – are there any limits to what you can get into?

Buckley: We’ve got the traditional passive inline decryption that you would find in a firewall to look at north-south traffic. Many customers use Gigamon because they want to decrypt the traffic once and use it many times.
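“Decrypt once, use many times” is essentially a fan-out: one decryption stage feeds every downstream tool, so no individual tool pays the TLS processing cost itself. A minimal sketch of the pattern, with stand-in tool names and a stubbed decryption step, might look like this:

```python
import queue

# One queue per downstream consumer of the decrypted stream
tools = {name: queue.Queue() for name in ("firewall", "ids", "apm")}

def decrypt(packet: bytes) -> bytes:
    """Stand-in for a real TLS decryption stage."""
    return packet

def fan_out(encrypted_packets):
    for pkt in encrypted_packets:
        clear = decrypt(pkt)          # decryption happens exactly once
        for q in tools.values():      # every tool receives the same copy
            q.put(clear)

fan_out([b"\x16\x03\x03 dummy TLS record"])
print({name: q.qsize() for name, q in tools.items()})
```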

With precryption, instead of worrying about the security stack, ciphers and key management, imagine if you could operate inside the workload to capture the traffic before it is encrypted and after it’s decrypted. There’s a technology called eBPF [extended Berkeley Packet Filter], a kernel element in Linux used extensively for things like server performance tests. We managed to figure out a way to use eBPF to extract the packet when it’s in its native form, or cleartext. Therefore, all the encryption that sits above us – both application and network security – is completely irrelevant.
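The general technique is public: open source tools such as bcc’s sslsniff use eBPF to hook a user-space TLS library, so payloads are captured before encryption ever happens. The sketch below shows that approach with a uprobe on OpenSSL’s SSL_write; it illustrates the method Buckley describes, not Gigamon’s proprietary implementation:

```python
from bcc import BPF

# BPF program: a uprobe on OpenSSL's SSL_write fires with the cleartext
# buffer, before any encryption happens.
bpf_text = r"""
#include <uapi/linux/ptrace.h>

struct event_t {
    u32 pid;
    u32 len;
    char data[64];   // first bytes of the cleartext payload
};
BPF_PERF_OUTPUT(events);

int probe_ssl_write(struct pt_regs *ctx, void *ssl, const void *buf, int num) {
    struct event_t ev = {};
    ev.pid = bpf_get_current_pid_tgid() >> 32;
    ev.len = num;
    // bpf_probe_read_user needs a reasonably recent kernel
    bpf_probe_read_user(&ev.data, sizeof(ev.data), buf);
    events.perf_submit(ctx, &ev, sizeof(ev));
    return 0;
}
"""

b = BPF(text=bpf_text)
# bcc resolves "ssl" to the system's libssl; the path varies by distro
b.attach_uprobe(name="ssl", sym="SSL_write", fn_name="probe_ssl_write")

def handle(cpu, data, size):
    ev = b["events"].event(data)
    preview = bytes(ev.data).split(b"\x00")[0]
    print(f"pid={ev.pid} wrote {ev.len} cleartext bytes: {preview!r}")

b["events"].open_perf_buffer(handle)
print("capturing pre-encryption TLS payloads, Ctrl-C to stop")
while True:
    try:
        b.perf_buffer_poll()
    except KeyboardInterrupt:
        break
```

Because the probe fires inside the sending process, the ciphers, key exchange and TLS version in use genuinely don’t matter, which is the point Buckley is making.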

Our UCT has eBPF components and with the permissions granted, we’ll extract the traffic into the UCT. Then, it’ll filter traffic based on rules the user has set up, encrypt the traffic and send it directly to us. With our own key management system, we can decrypt the traffic and send it to your firewalls, detection and response products and data lakes.
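In outline, that pipeline is: filter captured records against user-set rules, encrypt them, forward them, then decrypt centrally with a managed key. A toy version using the Python cryptography library follows; the rule format, record shape and key handling are all assumptions for illustration:

```python
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # stands in for a managed key service
fernet = Fernet(key)

RULES = {"allowed_ports": {443, 8443}}  # hypothetical user-set filter

def should_forward(record: dict) -> bool:
    """Apply the user's filtering rules to a captured record."""
    return record["dst_port"] in RULES["allowed_ports"]

def package(record: dict) -> bytes:
    """Encrypt a record before it leaves the workload."""
    return fernet.encrypt(json.dumps(record).encode())

records = [
    {"src": "10.0.0.5", "dst_port": 443, "payload": "GET /"},
    {"src": "10.0.0.5", "dst_port": 22, "payload": "ssh"},
]

outbound = [package(r) for r in records if should_forward(r)]
print(f"forwarding {len(outbound)} of {len(records)} records, encrypted")

# On the receiving side, the matching key decrypts each record
for blob in outbound:
    print(json.loads(fernet.decrypt(blob)))
```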

And we can send the traffic in a much simpler format for decryption and still be secure, because it’s fully encrypted in transit. We have the keys, and if a hacker tries to copy a customer database from an application, we’ll see the copy request in cleartext. We’ll send that to an MDR [managed detection and response] tool and it shows up as a red flag.

How are CIOs treading the line between monitoring their networks for malicious encrypted traffic and protecting the privacy of employee communications?

Buckley: That’s a great question. We have the same controls in our product that we’ve had for years. The IT organisation can set policies on what traffic they want to filter. They also have the ability to mask certain data, like social security and passport numbers. They can set a flag that says don’t precrypt the traffic from an application, but that could be insecure, because that application could be the very one hackers want to exploit. What we can do is precrypt the traffic but mask the data, so you understand what’s happening.
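Masking of this kind is typically pattern-based redaction applied before captured data leaves the workload. A minimal sketch follows; the regular expressions and the mask_payload helper are illustrative, not Gigamon’s actual policy engine:

```python
import re

# Hypothetical masking policies; real deployments would use far more
# robust detectors than these simple patterns.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # US SSN format
    "passport": re.compile(r"\b[A-Z]{1,2}\d{6,8}\b"),  # rough heuristic
}

def mask_payload(text: str) -> str:
    """Replace sensitive matches so analysts see structure, not values."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[MASKED-{name.upper()}]", text)
    return text

print(mask_payload("POST /update ssn=123-45-6789 passport=E1234567"))
# -> POST /update ssn=[MASKED-SSN] passport=[MASKED-PASSPORT]
```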

Now that we have a good overview of the technology and infrastructure, I’d like to get into the commercial bits. What’s your pricing model and how is the business doing globally and in the Asia-Pacific region?

Buckley: When you buy our software, first of all, our UCT is free. We’re partners of AWS and Microsoft Azure, so you can go to their marketplaces to use it. We monetise it through volume-based licensing, which is fully transferable. So, if you have a 250TB licence, it’s going to cost you a certain amount, and whether or not you move your workloads from cloud to on-premise and back doesn’t matter, because the licence is completely transferable at the organisation level. The licence gives you a network pack, an application pack and a security pack that does the security stuff like decryption.

We’ve been growing our annual recurring revenue [ARR] at a compound annual growth rate of 21% over the past five years. We are a Rule of 40 company, meaning our revenue growth rate plus our profit margin adds up to more than 40%. That’s considered to be very positive from a financial perspective.
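The Rule of 40 is simple arithmetic: revenue growth percentage plus profit margin percentage should come to at least 40. In the sketch below, the 21% growth figure is from the interview, while the margin is a made-up placeholder, since Buckley doesn’t disclose one:

```python
def rule_of_40(growth_pct: float, margin_pct: float) -> bool:
    """Revenue growth plus profit margin should reach at least 40."""
    return growth_pct + margin_pct >= 40

# 21% ARR growth is from the interview; the 22% margin is a placeholder
print(rule_of_40(21, 22))  # True: 21 + 22 = 43 clears the bar
```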

In Asia, we’re growing 40% this year and we’re excited about the opportunities in the region. We see huge opportunities in North Asia as well as Australia and New Zealand. We have the tailwind from the shift to hybrid cloud, which causes a lot of confusion, because people often say cloud is simple when it’s really complicated. We’re in the right place at the right time with a mousetrap, if you will, that makes things simpler, more transparent, straightforward and secure.

The second thing is that more organisations are recognising the importance of being compliant with zero trust. There’s a lot of noise about zero trust, and you could speak to 100 vendors that will tell you they make everything zero trust. That, of course, is not true. It takes a combination of vendors to create a zero-trust environment, not just one. Our role in zero trust is foundational: you need to make sure you have no blind spots in your infrastructure, specifically in east-west traffic, where there are tons of blind spots today, and you need to be able to see inside encrypted traffic. That’s our little role in zero trust, but we’re the only ones that do what we do really well.

Are there any sweet spots that Gigamon plays very well into from an industry perspective?

Buckley: There are quite a few – governments, telco service providers and enterprises. The geopolitical landscape has been quite unstable so there are a lot of investments by governments around the world to shore up their infrastructure. As for service providers, the move to 5G is creating opportunities as we can be embedded in 5G standalone infrastructure. The fact that we can extract metadata eliminates a lot of the inefficiency they had with probes and customised analytics stacks. In the enterprise, we’ve been strong in financial services, utilities, healthcare and transportation.

What about operational technology (OT) networks?

Buckley: It’s a great question. The internet of things [IoT] and OT are really big for us. We’re the easiest way to instrument OT devices. One of our OT partners deployed Gigamon infrastructure for a big hospital system in the US that thought it had 20,000 devices. Once the devices were connected to our infrastructure and all the traffic was captured, it turned out they actually had 65,000 devices. We also support SCADA protocols, so if you’re looking at command and control of a traditional power utility, we’re the easiest to work with. We’ve partnered with a number of renewable energy companies that have integrated our products into their stacks as well.
