1200 words, 5 minutes.
“The first virtue in a soldier is endurance of fatigue.” - Napoleon Bonaparte.
This is part 4 of 6 in a short series of posts on winning systems for Information Security practitioners. It aims to plug the gap between policy and products and put you, the practitioner, back in the driving seat. After all, if you don’t know what system you’re implementing, how can you decide what products or features are important to you? How can you evaluate what they might be worth?
Congratulations. You’re prepared. You’ve turned the odds around through rigorous application of default-deny. You’ve implemented some responsiveness. When your assets come under attack they automatically block or ban unwelcome connections, permanently or temporarily. But what happens when you must permit access? When you can’t ban, or you don’t have clear cause to ban? What happens when your potentially vulnerable applications come into direct contact with a bad guy? This is where our next system comes in. Robustness.
Something which is robust is said to withstand pressure and stress without fundamentally changing its behaviour. To continue the military theme, a soldier should remain effective whether he is tired, up to his waist in a swamp, or in the heat of the desert. Robustness is the result of anticipating stress and having countermeasures prepared: weapons that work in tough environments because they were designed that way; knowing how to look after oneself physically under adverse conditions.
In Information Security robustness is vital because the resources an attacker requires to generate stress are almost free. Even when considering the case of Distributed Denial Of Service (DDoS), bandwidth is “free” because it’s stolen. Your services had better be robust because just one attacker with one laptop can put them under immense stress. Today an attacker can take advantage of pay-by-the-hour cloud computing or rented botnets to induce huge stress from a smartphone in his pocket.
Consider the following forms of attack which rely upon a lack of robustness in applications, Operating Systems, or hardware.
- Password cracking, particularly if done offline.
- Stack smashing.
- Race condition exploitation.
- Row hammer attacks.
- Encryption cracking by cryptanalysis or less sophisticated means.
- CPU cache snooping.
- Many side-channel attacks.
These attacks have an element of trial and error in common. Today an attacker can afford a few million CPU cycles to get a result. They illustrate that under stress (more stress than their designers envisaged) the software and hardware no longer performs its function. In the cases above that function may be privacy (encryption cracking), separation (cache snooping), or privilege restriction (race-condition exploitation). It may have looked robust when it was designed, but things change. Robust designs and implementations can resist stress without failing. The very best will degrade gracefully as stress increases beyond a critical point.
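The trial-and-error economics work both ways: a defender can make each trial expensive. As a minimal sketch of the idea behind resisting offline password cracking (the iteration count here is an illustrative assumption, not a recommendation), key stretching with PBKDF2 multiplies the attacker's per-guess cost:

```python
# Sketch: key stretching makes each password guess costly.
# PBKDF2 forces an attacker to pay `iterations` HMAC computations per
# candidate password, instead of one fast hash per guess.
import hashlib
import os

def stretch(password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Derive a storage-safe hash; the per-user salt defeats precomputed tables."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)
stored = stretch("correct horse battery staple", salt)

# Verification recomputes the same derivation with the stored salt.
assert stretch("correct horse battery staple", salt) == stored
assert stretch("wrong guess", salt) != stored
```

An attacker who could test millions of plain SHA-256 hashes per second is reduced to a handful of guesses per second against the stretched form.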
What does any of this mean for an Information Security practitioner?
- It means where we have access to an OS feature which increases robustness, we should enable it.
- It means where we have the option of adding robustness to an application we should add it.
- It means when selecting an application from a list of alternatives, we should look for evidence of robustness.
These points deserve a little more explanation.
1. OS Robustness
Over time, robustness has been added to all popular Operating Systems. While a robustness retro-fit adds complexity (and complexity is normally the enemy of security), on balance it’s better to buy some robustness even at the cost of a little complexity. You can take advantage of these “robustness upgrades” by knowing what they are, keeping yourself patched and updated, and configuring them where configuration is required (they may not be active by default).
Examples of added robustness in Operating Systems include:
- Address space layout randomization (ASLR).
- Windows Data Execution Prevention (DEP).
- Linux grsecurity/PaX.
- Linux seccomp, used by Chrome, SSH, Docker.
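"Configuring them where they require configuration" starts with checking what is currently active. As a minimal sketch (assumes a Linux host; the procfs path and values come from the kernel's sysctl documentation), you can read the ASLR setting directly:

```python
# Sketch: read the Linux kernel's ASLR setting from procfs.
# 0 = disabled, 1 = stacks/mmap/VDSO randomised, 2 = heap randomised too.
from pathlib import Path

def aslr_level() -> int:
    return int(Path("/proc/sys/kernel/randomize_va_space").read_text())

if __name__ == "__main__":
    level = aslr_level()
    print(f"ASLR level: {level} ({'full' if level == 2 else 'reduced'})")
```

Anything below 2 on a modern distribution deserves investigation before an attacker finds it for you.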
2. Binary Robustness
Most of us now install applications from pre-compiled packages. Did you know that there are a number of compile-time options which increase the robustness of binaries? Do you know whether the packages you rely upon, those which are directly exposed to attackers, have these compile-time protections enabled? If you are comfortable rolling your own packages and compiling them from source, you should investigate these protections. Perhaps they are already enabled for the binaries you use; perhaps not.
3. Application Robustness
How do you spot robustness when you aren’t qualified to review an application’s design or source code? The short answer is you look for prior widespread vulnerabilities. You look for vulnerabilities which have impacted an entire class of applications. You look for cases where one application in that class has miraculously avoided vulnerability, especially when it suffered from the same class-wide bug as the rest. Here is your evidence that those behind the application had anticipated stress, and had countermeasures prepared. That application is robust. Give serious consideration to using it in place of the alternatives. Security professionals are naturally sceptical types. So I’m guessing you’ll want an example.
- Djbdns implements the exact same DNS protocols every other DNS server does, yet it wasn’t vulnerable to the cache-poisoning attacks which affected everyone else. Why? Robustness. I’m familiar with Daniel J. Bernstein’s software, and I can tell robustness is something he thinks about from the very beginning, not as an add-on. There are plenty of other examples of robust software: mail servers, web servers, SSL implementations. Find them. Remember them. Use them.
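The cache-poisoning case is worth quantifying. A minimal sketch of the arithmetic (the figures are the standard 16-bit DNS transaction ID plus an assumed ~16 bits of source-port entropy): randomising the query's source port, as djbdns did from the start, squares the search space a blind spoofer must cover.

```python
# Sketch: odds that one blind spoofed reply matches a pending DNS query.
# With only the 16-bit transaction ID randomised, one guess in 65,536 wins;
# ~16 extra bits of source-port entropy squares the attacker's workload.
TXID_BITS = 16
PORT_BITS = 16  # approximate usable ephemeral-port entropy

guess_space_txid_only = 2 ** TXID_BITS
guess_space_with_ports = 2 ** (TXID_BITS + PORT_BITS)

print(f"TXID only:   1 in {guess_space_txid_only:,}")
print(f"TXID + port: 1 in {guess_space_with_ports:,}")
```

A few thousand spoofed packets beat the first number in practice; the second pushes the same attack from minutes into weeks of sustained flooding.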
When all else fails it will be robustness which saves you from the next zero-day vulnerability. Without it you will be in a race against time to patch, upgrade, obtain attack signatures, or change Firewall rules.
Proactively Testing For Robustness
Mature organisations not only look for robustness when they first select software, they test for it. Continuously. Not just on a per-application basis but company-wide. Netflix are known to test for robustness for availability reasons (details of their security operations are not public). We can learn something from the way Netflix approach the availability problem. Read about their Simian Army. What would such an army look like if it were employed to test for security robustness?
- It would enumerate internal & external interfaces.
- It would find the web apps, the APIs, the open ports.
- It would test those potential weak points.
- Vulnerability scanning.
- Protocol fuzzing.
- Privilege escalation.
- Password guessing.
- Encryption cracking.
- It would do these things as your security measures were selectively deactivated.
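Enumerating the open ports, the first step on that list, can start from something as simple as a TCP connect scan of your own estate. A minimal sketch (the target host and port range are placeholders; real testing needs authorisation and considerably more care):

```python
# Sketch: enumerate open TCP ports on a host with plain connect() probes.
import socket

def open_ports(host: str, ports: range, timeout: float = 0.3) -> list[int]:
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds.
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

if __name__ == "__main__":
    # Placeholder target: only ever scan hosts you are authorised to test.
    print(open_ports("127.0.0.1", range(1, 1025)))
```

Each open port the probe finds is a candidate for the vulnerability scanning, fuzzing, and password-guessing stages above.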
Are you confident your IT could withstand such potentially destructive testing? If a security-centric version of Netflix’s Chaos Monkey were turned loose, what would it reveal? How many of your security measures would fail a robustness test?
Don’t forget you should be operating under a default-deny regime, banning anyone who outs themselves as malicious at the earliest opportunity. If those systems have not yet eliminated the threat, robustness will be waiting for the would-be attacker.
When Robustness Isn’t Enough
What then? What happens when even robust systems are defeated? What happens if the attacker somehow passes default-deny and avoids responsive blocking? What happens if he is devious or persistent enough to defeat even robust software? When robust software fails (or when fragile software is used) how can we still win?
If you’ve ensured all exposed software and services are robust, then it’s time to think the unthinkable and embrace the next winning system. Resilience.
Does choosing software for robustness, and throwing out fragile alternatives sound like too much work? You’d better have a system for saving your neck and addressing inevitable failures. I know a good one.