SentinelOne: sealing the cracks in the modern development stack

Developers love software application development security provisioning, penetration testing, procedural process management and (software) program protection planning in all its forms.

Okay, so what’s wrong with the above statement?

Spoiler alert: they (developers) generally don’t. There is typically a far more fervent drive to work on features, upgrades, enhancements, extensions, augmentations and anything else that will drive an application towards delivering greater user functionality, in buckets.

Security, meanwhile, is sometimes left as an afterthought.

Caleb Fenton, head of innovation at SentinelOne, thinks this is a real shame, if not a travesty. Well, he would: SentinelOne is a security research and development company where Fenton and his team spend their time analysing threats, malware and anomalies, while also mapping networks, finding vulnerabilities and so on.

So what does Fenton think developers should keep in mind when it comes to security at the programming level?

“As an app developer, I would hope that you would generally be interested in security and trends. For security, we do way more than detect malware and threats, we’re trying to minimise risk and automate decision making so you can focus on the complex stuff. To that end, we’re building up our cloud security offerings to protect your docker containers, cloud workloads and understand your cloud configuration and security posture,” said Fenton.

Head (& code) in the clouds

SentinelOne’s Caleb Fenton: Keep coding, but embrace behavioural-based detection.

He points out that every app developer knows there’s a good chance their code is going to run in the cloud, so the more we all understand how that cloud works, the more secure those apps can be made.

“While I think there’s a lot that can be done to improve security at the tooling layer… and there’s really cool developments in AI-based code generation and bug detection, SentinelOne is focusing on platform security i.e. making sure nothing malicious is happening on the endpoint, container and more,” said Fenton.

To provide an example here, he suggests, say a developer doesn’t initially care about Docker, Kubernetes and so on (because that’s DevOps’ job, right?). They should realise that this very attitude makes them a prime target for more sophisticated attackers.

We need to consider how easy it is to create a malicious plugin for VS Code or Sublime Text. Devs are busy trying to get stuff done and, if they see a plugin that sounds like it *might* do what they want, many will install it without a second thought. As an attacker, you may only need to compromise a single GitHub account to infect thousands of developer machines, or maybe just one of the libraries used by a plugin.

These kinds of supply chain attacks are impossible to block effectively with signatures – and the answer (in the SentinelOne world at least) is behavioural-based detection.
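To make that contrast concrete, here is a minimal, hypothetical sketch (the signature database and event names are invented for illustration, not SentinelOne’s actual detection logic): a signature check only matches exact file hashes, so a trivially repackaged payload slips through, while a behavioural rule looks at what the code actually does once it runs.

```python
import hashlib

# Hypothetical signature database: exact hashes of known-bad files.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"malicious-plugin-v1").hexdigest(),
}

def signature_detect(payload: bytes) -> bool:
    """Flag only payloads whose exact hash is already known."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

def behaviour_detect(observed_events: set[str]) -> bool:
    """Flag any process that reads credentials and then phones home,
    regardless of what the file hashes to."""
    suspicious_combo = {"read_ssh_private_key", "connect_unknown_host"}
    return suspicious_combo.issubset(observed_events)

# A trivially modified payload defeats the signature...
repacked = b"malicious-plugin-v2"
print(signature_detect(repacked))   # False: new hash, no match

# ...but its runtime behaviour still gives it away.
events = {"read_ssh_private_key", "connect_unknown_host", "spawn_shell"}
print(behaviour_detect(events))     # True
```

The point of the sketch is that an attacker can regenerate hashes endlessly for free, but changing what malware *does* is far more expensive.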

The modern stack is stacked

“If you *do* care about how your code is deployed, you’ll want visibility into any abnormalities. Is one of my containers behaving unlike other containers based on the same image? Opening thousands of connections or uploading gigs of data at 2am on a Saturday when other containers are quiet? Maybe the app was exploited. Did my helm chart accidentally create an externally facing load balancer to an internal service with privileged access to our database? The problem is that the modern deployment stack has so much configuration and so many moving pieces that it’s hard for app devs to keep it all straight in their heads. You need intelligent systems monitoring what’s happening and not generating so many alerts that you start ignoring them,” Fenton suggests.
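Fenton’s “one container behaving unlike its siblings” scenario can be sketched as a crude statistical baseline. This is an illustrative toy, not SentinelOne’s product logic: the container names, connection counts and z-score threshold are all made up, and a real system would baseline many signals over time.

```python
from statistics import mean, stdev

def flag_outliers(conn_counts: dict[str, int], z_threshold: float = 2.5) -> list[str]:
    """Flag containers whose outbound connection count sits far from the
    fleet average - a stand-in for real behavioural baselining."""
    values = list(conn_counts.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # all containers identical, nothing to flag
    return [name for name, count in conn_counts.items()
            if abs(count - mu) / sigma > z_threshold]

# Ten containers from the same image; one is suddenly very chatty at 2am.
fleet = {f"web-{i}": 40 + i for i in range(9)}
fleet["web-9"] = 5_000          # thousands of connections vs ~40 for peers
print(flag_outliers(fleet))     # ['web-9']
```

A production system would use more robust statistics and far richer telemetry, but the design goal Fenton names is the same: surface the one genuinely odd container without burying operators in alerts.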

On the trends side, a lot of people talk about AI, obviously. SentinelOne is using AI to collect telemetry from customer endpoints and networks (as well as threat intelligence feeds) and pipe it all to its data lake.

Fenton and team believe that customers should know what the SentinelOne process is and what it involves. The market still seems fairly unsure how to evaluate AI, and buyers often don’t know what questions to ask. “I’ve heard about neural networks. Are you using those?” comes up a lot, he says.

It’s raining training algorithms 

“I always push to be totally open about which training algorithms we use, our data acquisition and training pipeline, most of our features, the size and variety of our training set, how we evaluate models and so on. None of this is that special. There are videos on YouTube on ‘how to create an anti-virus in X minutes’ and they’re legit. The real secret sauce is creating and curating a large, clean, unbiased, highly varied and constantly growing training data set and extracting really informative features,” explained Fenton.
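The “anti-virus in X minutes” idea Fenton mentions really is that short to sketch. Below is a hedged toy version: the features (entropy, import count, a packer flag) and the eight hand-written samples are synthetic stand-ins, which is exactly his point, since the classifier itself is trivial and the hard part is the dataset feeding it.

```python
from sklearn.ensemble import RandomForestClassifier

# Synthetic toy features per file: [entropy, import_count, has_packer_flag].
# In a real pipeline these would come from static/dynamic analysis of
# millions of curated samples - the actual "secret sauce".
X = [
    [7.9, 3, 1], [7.6, 2, 1], [7.8, 4, 1], [7.5, 1, 1],    # labelled malware
    [4.1, 40, 0], [5.0, 55, 0], [4.5, 38, 0], [3.9, 60, 0] # labelled benign
]
y = [1, 1, 1, 1, 0, 0, 0, 0]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# High-entropy, packed file with few imports -> classified malicious.
print(clf.predict([[7.7, 2, 1]]))   # [1]
# Low-entropy file with many ordinary imports -> classified benign.
print(clf.predict([[4.3, 45, 0]]))  # [0]
```

On eight hand-picked points any model looks perfect; with real-world samples, class imbalance, dataset bias and label noise dominate the work, as Fenton says.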

But it must be tough to bring some of the new AI-enriched security tools to market, right… especially given that every developer will already know the basics of AI and think they know what will work and what won’t… and, at the extreme level, some developers eat quantum qubits for breakfast.

“Put it this way: the more you know, the more valuable you are. I’m ‘old enough’ to remember when you bought software, installed it on your machine and used it locally. Now, if you don’t know what a RESTful API is, you’re living in a cave. You won’t need to know exactly how all the training algorithms work because that will be handled by rapidly maturing libraries,” said Fenton.

I’m going in deep, give me (code) cover

He rounds up by saying that more advanced developers will need to understand the theory so they can go off the beaten path and build their own models i.e. knowing sklearn is table stakes, but you’ll want to know TensorFlow, Keras or PyTorch if you really want to go deep.

“In the past, devs wrote a function, applied it to data and got labels. Now, programmers have data and labels and machine learning creates the function. This means you have to get very good at cleaning and preparing data, engineering informative features, getting clean labels, evaluating the quality of a data set, evaluating model performance, designing and performing experiments and tuning hyperparameters of different models. All of these problems are easy to say but take a long time to learn, because almost everything must be learned from experimentation. Intuition and experience can save time, but very often there’s no way to know something until you try,” Fenton concludes.
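The workflow Fenton describes (prepare data, hold some back, tune hyperparameters by experiment, then evaluate) can be sketched end to end in a few lines of sklearn. The dataset here is synthetic and the parameter grid is arbitrary; the structure of the loop is the part that carries over to real problems.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for a labelled security dataset: data in, labels in,
# and machine learning "creates the function" between them.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)

# Hold out data the model never sees during training or tuning.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Hyperparameter tuning as a designed experiment: cross-validated search
# over candidate settings, using the training portion only.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [20, 50], "max_depth": [3, None]},
    cv=3,
)
search.fit(X_train, y_train)

# Final evaluation happens once, on the untouched test set.
acc = accuracy_score(y_test, search.predict(X_test))
print(f"best params: {search.best_params_}, test accuracy: {acc:.2f}")
```

Which grid values win here is exactly the kind of thing Fenton says you cannot know until you run the experiment.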

The message here is: security will love you (dear developer) and users will love your security just that little bit more if you work to embrace security controls as part of your wider approach to software application development and code architecture… now, that wasn’t so hard, was it?

Approved image use: SentinelOne