Whack-a-Mole – Wanna Cry!!

WannaCry was bad news. It crippled hundreds of thousands of hosts. Hospitals couldn’t retrieve patient records. Automobile manufacturers stopped production. Even now, the criminals continue to collect cash via Bitcoin wallets. WannaCry just might be the wake-up call the industry needs. But will it really change anything? Or are we just playing Whack-a-Mole with every new disaster?

Unfortunately, the evidence suggests we’re trapped in a perpetual carnival game. Our collective failure to fix these problems won’t be for lack of effort – patches will be made, ports will be plugged, and policies will be implemented. Talented people will do the best they can, as fast as they can.

So why are we struggling? Two reasons: First, the bad guys are really agile. Second, we continue to rely on rules and policies to protect critical infrastructure. We saw both dynamics in action during the WannaCry outbreak.

The cybercriminals behind WannaCry were agile

Today’s software developers use automated, agile development models that dramatically reduce release cycle times. Hackers, unfortunately, have adapted to that model faster than the security industry.  They’re able to refactor malware faster than we can create signatures or rules to fight it.

The first variant of WannaCry had a “kill switch” that stopped the attack if the malware couldn’t reach a specific domain name. A security researcher, after realizing the domain name wasn’t registered, registered it and stopped that variant’s propagation. Within a few hours, new variants appeared – this time without the kill switch.

WannaCry shows how fast the cybercriminals can change and adapt to keep their attacks lethal. As security professionals struggle to keep up, each new attack variant essentially becomes a new zero-day exploit that must be investigated and understood. By the time you’re ready to take a whack at the next mole, he’s already gone…

WannaCry got past readily available patches and best practices

We see, again, the continued failure of rule- and policy-based solutions to stop devastating attacks. WannaCry was enabled by two leaked NSA tools that had been in the public domain for weeks:

  • EternalBlue takes advantage of a known Windows vulnerability found in SMB file-sharing services. Fortunately, Microsoft patched this vulnerability earlier this year (MS17-010, released in March).
  • EternalBlue installs an NSA-sourced backdoor called DoublePulsar. Good news! DoublePulsar uses port 445, which, according to best practices, should be blocked from inbound external connections in most organizations.

So we’re done here, right? Just keep up with Microsoft’s patches and watch your network hygiene and it can’t possibly happen to you. Right?

Wrong. The problem is not your IT team’s devotion or talent. It’s not oblivious users who click on random email attachments. It’s not Microsoft’s fault. It’s not even the NSA’s fault. As long as we rely on rules, policies, and patches to protect us, we’re going to keep playing Whack-a-Mole. Users will click on phishing emails. Harried network engineers will open that one port someone’s app just HAS to have. Systems won’t get that essential Patch Tuesday fix.

The lesson we’ve learned from WannaCry – and every other recent sensational security story – is that current methods and approaches have limits. WannaCry is dramatic proof that patches and best practices aren’t enough. By now it should be clear: we need to think about security in a different way.

The answer is a technique called baselining. Systems will always have vulnerabilities, and cybercriminals may always be more agile than we are, but to complete an exploit they need to break some eggs and do something out of the ordinary. If we can spot these anomalies, we’ve found the key to a new security model.
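
What does baselining look like in practice? At its simplest, it’s an inventory of the behaviors a workload normally exhibits, learned over an observation window and then used to flag anything new. Here’s a rough sketch in Python; the event fields, host names, and ports are made up for illustration and aren’t taken from any particular product:

from collections import namedtuple

# A behavioral "event": which process on which host talked to which peer on which port.
# The fields are illustrative; a real collector would also track users, privilege
# changes, process launches, file activity, and so on.
Event = namedtuple("Event", ["host", "process", "peer", "port"])


class Baseline:
    """Learn the behaviors seen during an observation window, then flag anything new."""

    def __init__(self):
        self.known = set()

    def learn(self, event):
        # During the observation window, record every behavior as "normal".
        self.known.add(event)

    def is_anomalous(self, event):
        # Once the window closes, any never-before-seen behavior is an anomaly.
        return event not in self.known


baseline = Baseline()

# Learning window: the web tier normally talks to the database on 5432.
baseline.learn(Event("web-01", "nginx", "db-01", 5432))

# Later: a first-ever inbound SMB-style connection to that same host.
suspicious = Event("web-01", "lsass.exe", "203.0.113.7", 445)
print(baseline.is_anomalous(suspicious))  # True -> investigate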

Let’s take a closer look at WannaCry. As the exploit made its way down the cyber kill chain, it did a number of unusual things. A security tool trained to look for behavioral deviations from the cloud or data center’s baseline would easily spot any of these giveaways:

Kill Chain Phase: Initial Exploit

Anomaly: Incoming Application Connections

WannaCry made inbound connections on ports 445, 137, or 138 to reach Server Message Block (SMB) services. Vulnerable, unpatched Windows installations were compromised.

These incoming connections were anomalies in most impacted organizations. Any new connection to a machine type or application that does not normally talk on these ports is an anomaly – even if the ports were intentionally opened.
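
A baseline-driven check for this phase only needs to know which hosts normally accept inbound traffic on which ports; a first-ever connection on 445, 137, or 138 stands out immediately. A rough sketch, with hypothetical hosts and a made-up learned baseline:

# SMB/NetBIOS ports that WannaCry probed.
SMB_PORTS = {137, 138, 445}

# Learned during the observation window: which (host, port) pairs normally
# receive inbound connections. These entries are illustrative.
inbound_baseline = {
    ("fileserver-01", 445),   # the one host that legitimately serves SMB
    ("web-01", 443),
}

def check_inbound(host, port, src_ip):
    """Flag inbound connections that fall outside the learned baseline."""
    if (host, port) not in inbound_baseline:
        severity = "HIGH" if port in SMB_PORTS else "MEDIUM"
        return f"[{severity}] first-ever inbound connection {src_ip} -> {host}:{port}"
    return None

# A workstation that never accepted SMB traffic suddenly does:
print(check_inbound("workstation-17", 445, "198.51.100.23"))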

Kill Chain Phase: Arming and Weaponization

Anomaly: Application Launch

WannaCry executed remote code on the targeted machine to gain access to the host and install the ransomware module.

A binary that launches another as-yet-unseen binary on a production host is an anomaly. We don’t need rules or policies to know this is a big red flag.
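
One simple way to encode that red flag is to baseline parent/child process launch pairs and alert on any pair never seen during the learning window. A hypothetical sketch; the process names and baseline entries are illustrative:

# Parent -> child process launch pairs observed during the learning window.
launch_baseline = {
    ("services.exe", "svchost.exe"),
    ("explorer.exe", "outlook.exe"),
}

def check_launch(parent, child):
    """A binary spawning a previously unseen binary is a red flag."""
    if (parent, child) not in launch_baseline:
        return f"ALERT: {parent} launched unfamiliar binary {child}"
    return None

# A system process dropping and running a new executable versus a routine launch:
for parent, child in [("lsass.exe", "mssecsvc.exe"), ("services.exe", "svchost.exe")]:
    alert = check_launch(parent, child)
    if alert:
        print(alert)  # only the unfamiliar launch is flagged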

Kill Chain Phase: Escalate Privileges

Anomaly: User Executing Commands with Escalated Privileges

WannaCry escalated its privileges to evade policies that prevented remote code execution. Security professionals are all too familiar with the hazards of overly permissive user authorizations. They also know how easy it is to make this mistake.

Any new privilege change inside the data center or cloud is an anomaly and should always be investigated.
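
A baseline for this phase can be as small as the set of accounts that normally run privileged commands on each host; an escalation by anyone else gets flagged. Another rough sketch with made-up hosts and accounts:

# Accounts observed running privileged commands on each host during the
# learning window (illustrative entries).
privilege_baseline = {
    ("db-01", "dba_admin"),
    ("web-01", "deploy"),
}

def check_escalation(host, user):
    """Flag privilege escalations by accounts that never normally escalate."""
    if (host, user) not in privilege_baseline:
        return f"ALERT: unexpected privilege escalation by {user} on {host}"
    return None

# A service account that has never escalated before suddenly does:
print(check_escalation("web-01", "svc_web"))  # flagged for investigation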

Kill Chain Phase: Connection to Command and Control

Anomaly: Connection to New Host

WannaCry next connected to specific TOR sites to “phone home” to the cybercriminals’ command-and-control (C&C) servers.

Connections to TOR sites are anomalies in most cloud or data center environments. These new external connections indicate suspicious activity worthy of a closer look.
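
The same idea applies to egress: baseline the external destinations each application normally reaches, and treat any new one, such as a TOR address, as suspect. A sketch with hypothetical hosts and a placeholder .onion name (not a real indicator):

# External destinations each application normally reaches, learned during
# the observation window. Entries are illustrative.
egress_baseline = {
    ("web-01", "nginx"): {"api.payments.example.com"},
    ("db-01", "postgres"): set(),   # the database never calls out
}

def check_egress(host, process, destination):
    """Flag outbound connections to destinations never seen before."""
    allowed = egress_baseline.get((host, process), set())
    if destination not in allowed:
        return f"ALERT: {process} on {host} contacted new destination {destination}"
    return None

# A host suddenly phoning home to an unfamiliar hidden service:
print(check_egress("db-01", "postgres", "examplehiddensvc.onion"))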

Kill Chain Phase: Lateral Movement

Anomaly: New Internal Connection

Once installed on a system, WannaCry looked for other unpatched Windows hosts on the same subnet. It also checked random external IP addresses for similar vulnerabilities.

The lateral movement shows up as an anomaly because it creates new connections between internal applications. A new outbound connection from an application that doesn’t normally make such attempts is another suspicious event.
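
Lateral movement has a distinctive baseline signature: internal host pairs that have never talked before suddenly do, often in a rapid fan-out pattern. A final sketch; the host names and scan threshold are assumptions for illustration:

from collections import defaultdict

# Internal host-to-host connections observed during the learning window.
internal_baseline = {("web-01", "db-01"), ("web-02", "db-01")}

# Illustrative threshold: this many brand-new peers from one host in a
# short interval looks like subnet scanning.
SCAN_THRESHOLD = 3
new_peers = defaultdict(set)

def check_internal(src, dst):
    """Flag new internal connections and scan-like fan-out from one host."""
    alerts = []
    if (src, dst) not in internal_baseline:
        alerts.append(f"new internal connection {src} -> {dst}")
        new_peers[src].add(dst)
        if len(new_peers[src]) >= SCAN_THRESHOLD:
            alerts.append(f"{src} has reached {len(new_peers[src])} new hosts (possible SMB scan)")
    return alerts

# A compromised host suddenly probing its neighbors on the subnet:
for target in ("web-02", "app-03", "app-04", "app-05"):
    for alert in check_internal("web-01", target):
        print("ALERT:", alert)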

Security systems based on continuous baselining of application, machine, connection, and user behavior would have been able to detect the WannaCry ransomware threat at a number of stops along the kill chain. Baselining will, in fact, catch any threat that deviates from normal behavior. The only way to successfully penetrate a system protected by baseline monitoring is to break in while keeping everything exactly the same as before. That’s a very tall order, and hackers will surely trip over one behavior or another.
