Hunters: a rare but essential breed of enterprise cyber defenders
They wait, they watch, they search the outer reaches of networks and the darkest corners of the web, setting traps, crafting tools, collecting evidence and going in pursuit: they are the hunters
Proactive security is taking on a whole new meaning with the emergence of a rare breed of information security analysts who sniff out traces of cyber attackers and go in pursuit, relentlessly tracking and hunting down their quarry.
Information security leaders agree that the days of relying on security system alerts to scramble first responders to cyber attacks are past.
Today, defenders need to be less reliant on systems based on known attacks and more proactive, finding malicious activity and vulnerabilities before any harm can be done, thinking like attackers and blocking avenues of attack before they can be used.
In the face of increasing volumes of attacks, defenders need technologies that take care of the bulk of the low-level stuff so that they can concentrate on those slipping through the net.
It is these below-the-radar attacks that are potentially the most dangerous and the most persistent, giving cyber criminals the possibility of unfettered access to mission-critical data assets for months and even years.
Like dedicated law enforcers, hunters are typically the most enthusiastic, passionate and driven security analysts. They enjoy investigating and are not waiting for the alerts or emergency calls to come in. They run where they know attackers run, they listen where they know attackers communicate, and they watch where they know attackers are likely to be testing the boundaries and trying their luck.
Some hunters, like those who work in the RSA FraudAction division, take it even further by pretending to be hackers themselves to get inside the heads of the cyber criminals and learn their techniques. They are long-standing members of hacker forums, talking directly to hackers – their intended prey. This is proactive security in the extreme, and typically the preserve of only the most dedicated seekers of the truth.
Out of the ordinary
But at the most basic level, where there is no known or obvious intrusion, hunters are looking for abnormal, unusual or suspicious behaviour, especially in relation to high-value data assets, wherever there is risk and attackers may be active.
“Hunters typically look at all processes, tools, commands and network file shares that are running in an environment to pick up potential indicators of compromise that security systems would miss because they are not malicious in themselves, but a trained eye can recognise if something is inappropriate, unlikely or unusual, which can signal that something is amiss,” says Ben Johnson, chief security strategist for Carbon Black.
“Where there is a known compromise, hunters have a role to play in finding out if attackers are active elsewhere in an organisation or doing anything else that may not be obviously connected to the known compromise by looking for related or similar behaviour and identifiers such as IP addresses, system commands and command formats or styles,” he says.
But before hunters can do anything, they have to be able to see what is going on. That means organisations have to do everything they can to have the highest level of visibility of their IT environment, including the network and all endpoints, through things like effective logging and continuous activity monitoring. Despite the usefulness of these tools to investigators, most companies have yet to invest in this capability.
“Once the appropriate information is collected, it needs to be centralised using something like Splunk, Hadoop or a traditional SIEM [security information and event management] system like QRadar or ArcSight, and then it’s a case of correlating that information and doing some simple analytics by running queries to look at all the users that have only logged in once, or who log in mainly at night or very early in the morning,” says Johnson.
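The queries Johnson describes are simple enough to sketch. The following is a minimal illustration in Python, assuming the centralised events have been exported to a hypothetical auth_events.csv file with user and timestamp columns, and taking an arbitrary 22:00-05:00 window as "night"; it shows the idea rather than any particular SIEM's query language.

```python
import pandas as pd

# Hypothetical export of centralised authentication events with
# user and timestamp columns.
events = pd.read_csv("auth_events.csv", parse_dates=["timestamp"])

# Users who have only ever logged in once.
login_counts = events.groupby("user").size()
single_login_users = login_counts[login_counts == 1].index.tolist()

# Users whose logins happen mainly at night or very early in the morning
# (the 22:00-05:00 window is an arbitrary choice).
events["hour"] = events["timestamp"].dt.hour
night = events[(events["hour"] >= 22) | (events["hour"] < 5)]
night_ratio = night.groupby("user").size() / login_counts
mostly_night_users = night_ratio[night_ratio > 0.8].dropna().index.tolist()

print("Users seen only once:", single_login_users)
print("Users logging in mainly at night:", mostly_night_users)
```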
Other typical hunter weapons include operating system logs, network packet capture systems and endpoint detection and response tools, which provide data that can be analysed centrally to find what is unusual or rare. This is more efficient than the more manual approaches – such as memory analysis of every machine on the network – that security analysts would typically have used in the past.
Despite the potential value of this approach and the fact that just about every organisation recognises the need to do it, Johnson says it is still extremely rare. “It is not a matter of convincing them,” he says. “It is a matter of security teams not really knowing what they could be doing and the lack of human resources to do it.
“Many organisations have the logs, but they are just not looking at them or are not looking at them in the right way.”
The organisations that are adopting the hunter approach tend to be those that have developers and engineers involved in security, such as technology companies and large financial institutions where they are able to write code to collect the data, run queries to strip out the noise and present it to hunter analysts.
Johnson says: “If you think about it, that is really how intelligence agencies like the NSA and GCHQ work: get as many sensors as you can out there to collect as much information as you can, centralise it in huge data systems, and then use computational power to put only the enriched, relevant data in front of the human analysts.”
Hunting at scale
This approach, using automation and coding to sift through multiple sources of data and to execute responses, offers a hunting approach at scale and allows security teams to be smaller and more dynamic.
“It is easy to start building up hunting skills by using something like Splunk to collect where user logins are happening,” says Johnson, “and then looking at who is logging in but only rarely, who is logging into multiple systems at the same time, and other simple queries that can help catch malicious activity that would otherwise have been missed.”
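The "multiple systems at the same time" check can be sketched in the same spirit, again assuming a hypothetical auth_events.csv export, this time with a host column, and an arbitrary five-minute window standing in for "at the same time".

```python
import pandas as pd

# Same hypothetical export, this time with user, host and timestamp columns.
events = pd.read_csv("auth_events.csv", parse_dates=["timestamp"])
events = events.sort_values("timestamp")
window = pd.Timedelta(minutes=5)  # arbitrary definition of "at the same time"

suspects = set()
for user, group in events.groupby("user"):
    times = group["timestamp"].tolist()
    hosts = group["host"].tolist()
    for earlier, later, host_a, host_b in zip(times, times[1:], hosts, hosts[1:]):
        if later - earlier <= window and host_a != host_b:
            suspects.add(user)
            break

print("Users active on more than one system within five minutes:", sorted(suspects))
```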
For example, where attackers steal valid user credentials, they are difficult to detect because there is no malware or other hacker code. However, Johnson says that when this happened at a Fortune 50 company, collecting everything running on every endpoint and centralising it made it possible to identify the compromised computers by tracking commands that normal users would not run.
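One way to approximate what Johnson describes is prevalence analysis: count how many endpoints have ever run each command and surface the rarest for a hunter to review. A minimal sketch, assuming process execution records have been centralised into a hypothetical process_events.csv with host and command_line fields:

```python
import csv
from collections import defaultdict

# Hypothetical export of centralised endpoint process events with
# host and command_line fields.
hosts_per_command = defaultdict(set)
with open("process_events.csv", newline="") as f:
    for row in csv.DictReader(f):
        hosts_per_command[row["command_line"]].add(row["host"])

# Commands seen on only a handful of endpoints are candidates for review:
# attackers working with stolen credentials tend to run tools that
# ordinary users never touch.
rarest = sorted(hosts_per_command.items(), key=lambda item: len(item[1]))[:20]
for command, hosts in rarest:
    print(f"{len(hosts):4d} host(s): {command}")
```

Checks like this are exactly the kind of thing Johnson suggests automating once they prove useful, so the analyst can move on to the next question.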
“Once you find hunting algorithms that work in your environment, automate them and move on to the next thing, continually adding Mission Impossible-style laser beams and tripwires that raise the bar for attackers and make it increasingly difficult for them to operate in your environment without exposing themselves,” says Johnson.
Building their own tools
Dedicated hunters, however, go beyond the capabilities of existing tools and technologies, using more art than science to push the boundaries of what is possible by coding new tools on the fly to discover what was formerly undiscoverable.
And this is where cyber hunting reveals itself as a truly dark art, with most hunters unwilling to shed much light on how they practise their craft. This is partly to avoid tipping off attackers about how they can see the bad in what appears to be normal and good, partly down to professional jealousy, and partly a result of the non-disclosure agreements (NDAs) that keep the best tales of cyber sleuthing out of the public domain and which have stopped several hunters from sharing their finest moments.
“Details of our methods must remain secret to be effective,” says Andrew Nanson, chief technology officer of Corvid, the high-end internal and external security services division of the Ultra Electronics Group, and former cyber security adviser to Nato and the UK’s intelligence and defence agencies.
Hunting is so much more than behaviour analytics and anomaly detection, he says. “Attackers know companies are going to be doing that, so they are either patient or just ensure that all their actions look like normal user actions and use applications that are valid user applications. Attackers are not stupid.”
According to Nanson, searching for anomalies is the easy bit. “Hunters will also look at what appears to be ‘normal’ because that is where you will find the attackers – they operate in the grey space. If you are not getting at least 50% false positives on your investigations, you are not investigating the grey space where attackers often operate.”
If organisations investigate only anomalies and incidents, then skilled attackers are almost certainly going to be able to operate in their networks undetected, Nanson warns.
Hunters, he says, start with the assumption that all systems are compromised. But because there are various levels of sophistication in compromise, the first thing is to ensure the system in question is not compromised by obvious malware that is well known and easy to detect.
“If an organisation has Conficker on its systems, then that’s a bad situation,” he says. “It means the systems are unpatched and the antivirus software is not up to date, and that is just not acceptable, even though Conficker was brilliant.”
Nanson says that Conficker was the first to use a domain generation algorithm (DGA) to call home and get control instructions.
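Hunters still look for DGA-style behaviour today. The sketch below is a crude heuristic, not Conficker's actual algorithm: it simply flags long, high-entropy, vowel-poor domain labels of the kind DGAs tend to produce, with thresholds chosen arbitrarily for illustration.

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    counts = Counter(text)
    return -sum((n / len(text)) * math.log2(n / len(text)) for n in counts.values())

def looks_generated(domain: str) -> bool:
    # Crude heuristic, not Conficker's algorithm: machine-generated labels
    # tend to be long, high-entropy and short on vowels compared with
    # names people choose. Thresholds are arbitrary.
    label = domain.split(".")[0].lower()
    if len(label) < 10:
        return False
    vowel_ratio = sum(label.count(v) for v in "aeiou") / len(label)
    return shannon_entropy(label) > 3.5 and vowel_ratio < 0.3

for domain in ["computerweekly.com", "xkqvjzrtwpbnmdfg.info"]:
    print(domain, looks_generated(domain))
```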
All bad until proved good
The next step is to analyse all the systems to ascertain what is on all of them.
“What we don’t know, we assume is bad until we know that it isn’t,” Nanson explains. “Instead of looking for a signature of ‘bad’ we assume that the best you can have is a signature of ‘good’, and even then you shouldn’t trust that because attackers can still inject malcode into legitimate processes using a technique known as ‘process hollowing’ and get these processes to hide and run the malicious code, making it very, very difficult to identify that something that is running is bad.”
For this reason, the essential next step is to do memory analytics to identify if any of the benign-looking processes running on a machine are actually malicious.
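Full memory analytics is beyond a short example, but the "signature of good" idea can be illustrated: hash the on-disk image of every running process and treat anything not on an allowlist as suspect. The sketch below uses the third-party psutil library and a hypothetical, empty KNOWN_GOOD allowlist; it deliberately stops short of the memory-versus-disk comparison that would be needed to catch process hollowing.

```python
import hashlib
import psutil

# Hypothetical allowlist of SHA-256 hashes of known-good executables.
KNOWN_GOOD: set[str] = set()

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

for proc in psutil.process_iter(["pid", "name", "exe"]):
    exe = proc.info.get("exe")
    if not exe:
        continue
    try:
        file_hash = sha256_of(exe)
    except OSError:
        continue
    if file_hash not in KNOWN_GOOD:
        # Everything unknown is treated as bad until a hunter proves otherwise.
        print(f"unverified: pid={proc.info['pid']} name={proc.info['name']} exe={exe}")
```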
“Generically speaking, we analyse at the network level, at the host level and at a metadata level, applying our intelligence to as many different places as we can in the IT environment, but without giving the IT department extra jobs to do, such as blocking IP addresses that happen to have scanned the organisation’s firewall, which is not very helpful,” says Nanson.
Doomed to failure
A purely product-based approach to security, he says, is doomed to failure. “If I were to come up with a product today, then by the time I have got finance for it, developed it, tested it and released it to the market, the attacks would have moved on.
“Instead, you need a continually evolving platform of capability and to be as agile as the attackers, because if you are not, you will always be waiting for someone else to develop a product to defend against the current attack vector.
“This is why you need good people, not just people who can drive a GUI, because all they can do is what a software developer designed the tool to do. You’ve got to have people who can work at a raw data and content level and know how to create new methods of detection, or at least hypothesise a new compromise vector that you have not considered previously.”
Hunting, says Nanson, is the most effective way of detecting compromise because if organisations hunt instead of “sitting back and waiting for something to advertise that an attack is under way”, then they are being proactive rather than reactive, potentially reducing the window of opportunity for attackers.
This is so important, he says, especially in the light of reports that the average time attackers spend inside corporate networks before they are discovered is around seven and a half months. Ideally, organisations should be working to reduce this window of opportunity to days and even hours.
“In the absence of a security product that guarantees finding attackers 100% of the time, that stops them in their tracks, and can tell you where they are and how they got in, you need a constantly evolving, proactive hunting approach whereby you are constantly looking at what is going on and constantly devising new techniques of detecting compromise,” says Nanson.
Nanson refuses to be drawn on just how this is done and how hunters tell the good from the bad; all he will say on the topic is that “attackers do make mistakes”. But he adds that hunters cannot rely on that, and admits that sometimes it is simply a question of luck.
IOC tunnel vision
Attackers also all follow the same high-level script, says Jared Myers, advisory practice consultant at RSA and one of RSA’s hunters for hire. “Attackers have to get in some way, then they need to obtain credentials and start moving around to target the data, and then they have to get that data out, so that is where we focus our attention rather than becoming caught up in chasing IOCs [indicators of compromise].” He adds that this approach has proved useful in establishing the scope of an intrusion and finding things that targeted organisations were not yet aware of.
Although malware changes rapidly, some components like communication protocols do not change that much or that often, so focusing on such elements can be useful, says Myers, although it typically requires some reverse engineering and decryption.
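Beaconing is one such element: command-and-control traffic tends to call home at near-constant intervals. A minimal sketch of that check, assuming outbound connection logs exported to a hypothetical connections.csv with timestamp, src and dst fields, and an arbitrary jitter threshold:

```python
import csv
import statistics
from collections import defaultdict
from datetime import datetime

# Hypothetical export of outbound connection logs with timestamp, src and dst fields.
timestamps = defaultdict(list)
with open("connections.csv", newline="") as f:
    for row in csv.DictReader(f):
        timestamps[(row["src"], row["dst"])].append(datetime.fromisoformat(row["timestamp"]))

for (src, dst), seen in timestamps.items():
    if len(seen) < 10:
        continue
    seen.sort()
    gaps = [(later - earlier).total_seconds() for earlier, later in zip(seen, seen[1:])]
    mean_gap = statistics.mean(gaps)
    jitter = statistics.pstdev(gaps)
    # Call-outs at near-constant intervals (low jitter relative to the mean
    # gap) are worth a hunter's attention as possible beaconing.
    if mean_gap > 0 and jitter / mean_gap < 0.1:
        print(f"possible beacon: {src} -> {dst} roughly every {mean_gap:.0f}s")
```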
Connecting the dots
Also, being able to connect the dots to find where and how attackers get into systems and networks, he says, is important in preventing attackers from coming back in the same way.
In connecting the dots, it is sometimes the strangest things – such as a shutdown taking fractionally longer than usual – that can provide the vital tip-off, according to Nanson. However, he reiterates the importance of good processes, constant innovation around detection techniques, and constant monitoring of systems and analysis of security data, communications and connections.
“Hunters need to ask continually what is the thing they can’t detect, and then work out how to detect that,” he says. “Consequently we spend a lot of time testing and validating new methods of detecting compromise.
“How would I possibly identify file-less malware taking advantage of a zero-day vulnerability in Silverlight? is an example of a typical question a hunter might ask, as well as: Are there any artefacts left on the box? and: Are there any indicators of compromise that we can start looking for?
“Being able to come up with and test a theory quickly and then refine and validate is really important for any cyber defence team because it gives you the chance of coming up with a technique to detect, and hunting is all about producing new detection techniques,” says Nanson, giving the sense that he may have said more than he intended to.
The human dimension
While a growing number of organisations recognise the need to be more proactive in their approach to security, most lack the human resources to pursue hunting even at the most rudimentary level, which in many cases may be all that is necessary. In other cases, outsourcing the hunting function to suppliers with state-of-the-art capabilities makes more sense.
“Many organisations would not, for example, require someone full-time to reverse-engineer malware, so it would be easier and cheaper to outsource that,” says Myers. “However, if most of the organisations we’ve been called into in the past year had been more proactively hunting, they would have easily halved their exposure time.”
For this reason, more companies are now seeking to have an in-house hunting capability – in some cases to have a specialist on board who can derive and apply insights from company investments in state-of-the-art technology and in other cases to avoid future cyber attacks.
Grow your own hunters
In that capacity, organisations also need to be proactive if they are to succeed, according to Amit Yoran, president of RSA.
In the face of the global shortage of cyber security skills, he is urging organisations to grow their own hunters by providing the right environment for the most creative IT security professionals to flourish, develop and hone their skills.
Hunters may not be the norm currently, but that is changing, and Yoran is one of the security industry figures encouraging the trend.
“If you don’t have hunters, grow them, or at least don’t stand in their way. Let them evolve into the hunters you need,” he said in his keynote presentation at RSA Conference 2016 in San Francisco.
Yoran called on organisations to focus their technology investments on supplementing and enhancing their security teams’ native capabilities to make them smarter, more efficient and more scalable.
“Where organisations are enabling security specialists to work remotely, they are also finding it easier to build in-house hunting capability,” says Myers.
According to Yoran, organisations should invest in technologies that give comprehensive network visibility to enhance rather than replace human creativity and problem-solving, and to create a culture that embraces the smart creative, the free thinker and the curious.
“Free your people to chase the why,” he said. “Allow, train and equip your people to be hunters. Focus on empowering them with the tools that can fuel their curiosity and enable them to find the answers they seek.”
He added that if organisations embrace the freedom to actively hunt adversaries, they will attract the right team, and in doing so, create the right culture.
Follow the renegades
Yoran said that the security industry needs to “wake up” and think as creatively as those who founded the industry.
“Our industry was founded and built by mischievously creative, almost eccentric, pioneering renegades,” he said, challenging all information security professionals to do things differently.
And doing things differently is at the core of hunters-for-hire business Raytheon Foreground Security, which also provides the opportunity for customers to grow their own in-house capability.
The demand for hunter-supported security operations centres is mainly from big government agencies and suppliers of critical infrastructure, says David Amsler, president and CIO at Raytheon Foreground Security.
The hunting concept, he says, was born out of frustration at the reactive, inefficient nature of standard security operations centres.
Before Foreground switched to a primarily hunting approach, 90% of its analysts’ time was spent chasing security system alerts, with only 18% of that time yielding positive results.
“This meant analysts could spend only 10% of their time independently looking for anomalies and other indicators of compromise,” explains Amsler.
Analysts beat technology
The most telling finding of an independent study, he says, is that 100% of the critical threats and 88% of the high-risk threats were found by an analyst, not directly by any security technology.
“That tells me that the security industry is good at automating the detection of run-of-the-mill stuff, but there is no technology that finds the advanced threat actor,” says Amsler.
As a result, he rejected the “reactionary” methodology driven by signatures, rules and sandboxing that every managed security service and security operations centre was using.
Instead, he developed a hunting-led approach that is behaviour and anomaly-driven and supported by machine learning and technologies that provide full visibility of the IT environment.
Hunter school
Because hunting relies on humans, Foreground set up a training school for hunters, which now offers 94 courses designed to turn anyone into a qualified hunter within a year.
The company has also developed its patented machine learning Automated Threat Intelligence Platform (Atip) to support hunters by talking to a wide range of systems in an IT environment to provide complete visibility.
This switch in focus has had a measurable impact on Foreground’s efficiency, says Amsler. “Now 61% of the analysts’ time results in finding the bad, up from 18%, and we’ve reduced our false positives from 73% down to the low 20s.”
Although Amsler developed the hunter training school to keep Foreground well supplied with skills it needs, its courses are now open to customers who want to train their own hunters.
“But most companies struggle to find, attract and retain people with hunting capabilities, so we are finding success with a hybrid concept where companies have a core security team that understands the business and its processes and has some basic hunting capabilities, but works with our hunters as part of their team as and when required,” says Amsler. “The more trained eyes you have on the problem, the better, because no one is going to win the war on their own.”
RSA has a similar philosophy. Yoran cites a case where one of the hunters in RSA’s incident response team trained an internal analyst from a customer company, allowing him to shadow the RSA team’s activities to ensure he understood the methodology and process of hunting.
“Over the course of a few months, he became a master analyst, capable of hunting on his own, actively combating sophisticated threat actors interactively, denying them access to systems which would otherwise remain owned for months or years,” he told attendees of RSA Conference 2016.
Amsler believes this hybrid model can work for a large part of the market. He and Yoran may be right: it offers a middle ground between building an in-house hunting team with state-of-the-art capabilities and relying on a completely outsourced managed security service or security operations centre, while still enabling companies to grow their own hunting capability.