Winning Systems & Security Practitioners 7. Attack Surface Reduction

2000 words, 7 1/2 minutes.

Attack Surface Reduction

Heraclitus. From Illustrerad verldshistoria ("Illustrated World History"), published by E. Wallis, Volume I, 1875-9.

“Out of every one hundred men, ten shouldn’t even be there, eighty are just targets…”

Heraclitus 535 - 475 BC.

My posts on Winning Systems for Cyber Security Practitioners are my most popular. In them, I attempt to change your perspective on the relative importance of products and skills in securing what’s precious to you. I make the case for using systems (in the broadest sense of the word), not goals. Systems aren’t new. They pre-date today’s product era. From infrastructure to code, you can count on them. Products and vendors come and go. Skills are irregular and scarce. It is systems that are eternal and universal.

If you’ve taken my advice, you’ve factored default-deny, responsiveness, robustness, and resilience into your thinking. This post is about minimising your attack surface. In it, I explain what this means and why it may be our best hope for reducing vulnerability in practice. Finally, I’ll tell you what the world might look like if we took this winning system to its ultimate logical conclusion. Before we get there, let’s begin at the beginning.

The Problem

1: Codebase Size & Quality

According to the Software Engineering Institute, software developed in the USA has about 6000 defects per million lines of source code (MLOC) [1] for high-level languages like Java and C#. High-quality code has 600 to 1000 defects per MLOC. Code of exceptional quality developed with the highest levels of assurance exhibits defects below 600 per MLOC. Between 1% and 5% of those defects are thought to be security vulnerabilities [2]. Let’s apply that research to some well-known codebases and see what the numbers look like.
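
To make the arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. It reproduces the FreeBSD row of the table below; the defect rates and the 1%-5% vulnerability fraction come from the SEI figures above, and the MLOC count is an approximate public estimate.

```python
# Rough vulnerability estimate: MLOC multiplied by defects per MLOC, multiplied
# by the fraction of defects believed to be security vulnerabilities (1%-5%).
def estimate_vulns(mloc, defects_per_mloc, vuln_fraction):
    return round(mloc * defects_per_mloc * vuln_fraction)

# FreeBSD at roughly 9 MLOC: typical US code quality for the high estimate,
# exceptional quality for the low estimate.
high = estimate_vulns(9, 6000, 0.05)   # 2700
low  = estimate_vulns(9, 600, 0.01)    # 54

print(f"FreeBSD: ~{high} vulnerabilities (high estimate), ~{low} (low estimate)")
```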

Software     MLOC   Vulns. (high est.)   Vulns. (low est.)
FreeBSD         9                 2700                  54
Oracle [3]     25                 4500                  90
Windows        50                 9000                 180
Facebook       65                19500                 390

The low estimates don’t look bad, but there’s no chance they apply here: they assume exceptional code developed with the highest levels of assurance. Even the high rate may be too conservative for contemporary Internet-based applications. Most technology companies optimise for the shortest time to market, because they have competition. They optimise for flexibility, because they don’t know what they’re building until they find product/market fit. Security, stability, and ease of maintenance (the conventional measures of software quality) are not high on the list of priorities.

2: Tractability & Distributedness

MLOC is a simplistic measure. Modern Internet applications consist of a mixture of software from a long and complex supply chain. Partly Open Source. Partly licensed. Partly offshore. Partly of unknown origin. Some code recent. Some ancient. Some developed with great care. Some slapdash. Modern web applications draw upon software in the form of services from a dozen places. Your application may be partly distributed even though you think of it as centralised.

Consider this blog. It has a web server and an Operating System. It uses cosmetic client-side JavaScript. It has an external comment section. Fonts and cookies come from Google. Then there are the SSL and DNS services without which the site wouldn’t function. All this attack surface for some static text that looks much as it would have in the 1980s. Its puny code is distributed and diverse. It doesn’t even offer an application in the conventional sense. The simplest things have a larger attack surface than you think. Some numbers:

Software      MLOC    Vulns. (high est.)   Vulns. (low est.)   CVEs To Date
OpenSSHd      0.020                    2                   0              2
NGINX         0.165                   50                   1             23
OpenSSL [4]   0.536                  150                   3            194
LibreSSL      0.344                  104                   2              7

Nobody believes OpenSSL’s 194 CVEs represent anything more than a small fraction of the actual vulnerabilities. OpenSSL is mature and supposedly security-critical. It has a highly exposed codebase. It has a single design purpose. What do you think the vulnerability rate is like for the average piece of software? Our high estimates don’t look high after all.

3: Informal Methods

Some vulnerabilities are design weaknesses or logic errors. Not buffer overruns. Not input validation problems. Not statistical errors. Not side-channel attacks. They are precedence bugs, off-by-one bugs, or state machines that misbehave. Not the kind of flaw that static analysis is good at spotting. Not the kind of flaw you can easily patch away.
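
As an illustration, here is a contrived sketch (not drawn from any real codebase) of the kind of flaw I mean: a session state machine with a missing transition guard. Every input is well-formed, nothing overruns, and yet the logic lets a client skip straight to a privileged state.

```python
# Contrived sketch of a logic flaw: the state machine never checks *which*
# state it is in before honouring a command. No malformed input, no memory
# corruption -- just a missing guard that tools rarely flag.
class Session:
    def __init__(self):
        self.state = "connected"   # intended flow: connected -> authenticated -> admin

    def handle(self, command):
        if command == "LOGIN":
            # ...password check omitted for brevity...
            self.state = "authenticated"
        elif command == "ENABLE_ADMIN":
            # BUG: should require self.state == "authenticated" first.
            self.state = "admin"
        return self.state

s = Session()
print(s.handle("ENABLE_ADMIN"))   # "admin" -- privilege reached without ever logging in
```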

4: Unintended Consequences

I’ve said nothing about deployment practices, configuration, or administration. Even if you aren’t a developer, there’s no shortage of ways you can expose or create vulnerability as a consequence of individual configuration choices. Software has many layers and many settings. Unintended consequences arise from interactions between these settings and layers. Defaults are frequently a poor choice. Even documentation is sometimes misleading in a way that jeopardises security.
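
A small, hypothetical example of how defaults and layered settings interact badly: a service whose bind address was never set, where a convenient default silently exposes an internal-only service on every interface. The SERVICE_BIND_ADDR variable here is an assumption, not a real product setting.

```python
# Hypothetical configuration sketch: neither layer is wrong in isolation;
# the interaction between an unset value and a permissive default is the
# vulnerability.
import os

def bind_address():
    # Default chosen for convenience, not safety.
    return os.environ.get("SERVICE_BIND_ADDR", "0.0.0.0")

def safer_bind_address():
    # Safer default: loopback only, so exposure has to be an explicit decision.
    return os.environ.get("SERVICE_BIND_ADDR", "127.0.0.1")

print(bind_address())        # "0.0.0.0" unless the operator remembered to set it
print(safer_bind_address())  # "127.0.0.1" unless deliberately overridden
```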

As code breadth, depth, diversity, and distributedness increase, so does the scope for errors and vulnerability. These four problems are the reason why software is insecure.

Good News, Bad News, Luck & Judgement

The news isn’t all bad. Some part of the odds falls in our favour.

  • Not all vulnerabilities are exposed by default.
  • Not all exposed vulnerabilities are exposed to unprivileged users.
  • Not all vulnerabilities exposed to unprivileged users end in system compromise.

Experience tells us we can’t draw comfort from this. There are many winning systems but luck is not one of them.

  • What was hidden from exposure can be inadvertently revealed.
  • What was inaccessible to unprivileged users can slip within their reach.
  • What looked like a tiny hole turns out to be a secret passage into the castle.

What about judgement? The Software Development Life Cycle (SDLC) improves software quality and, with it, security. However, it provides only a marginal reduction in total vulnerability once we consider the sum of a deepening stack, spiralling dependencies, and the increasing distributedness and diversity of code within real applications. If software quality is order and flaws are disorder, disorder is still increasing at a faster rate than order can be imposed. This isn’t an argument against S-SDLC, DevSecOps, Formal Methods, or any other software quality initiative. It’s an observation that these things are not an adequate braking force. We continue to roll downhill towards greater overall vulnerability.

Consider “as a service” providers. They carry their own vulnerable software, which they may improve. They also carry the vulnerable “genes” of all the other components, products, and third-party services they build upon. They have no meaningful control over this inheritance and only partial influence over whether those genes are “expressed” in their particular formulation of an architecture or service. Soon software will be everywhere, inside everything. Much of it vulnerable.

The winning system is, therefore, the relentless limitation of attack surface. Whether it be by design, development, or deployment. Preferably all three. Don’t wait for software quality to improve. Count on it remaining lousy.

Reducing Attack Surface In Practice

This series is for practitioners, but it talks of systems. It does so because practices co-evolve with technology and technology becomes obsolete, invalidating practices with it. You can watch this happen in real-time. Professionals, products, and companies struggle to stay relevant when customers move from traditional physical infrastructure to cloud to serverless.

Winning systems are eternal. They transcend the tech-du-jour. The IT industry hasn’t grasped this. That’s why job descriptions are still packed with vendor and product names. It’s why product companies fail to speak a language that business understands. It’s why I still think a classical Computer Science and Software Engineering background helps. It separates pristine theory from messy, proprietary, and perishable practice.

So let’s start with the pristine theory before getting dirty with practices (a short code sketch follows the list):

  • For software which you must expose, expose only a small part.
  • For that small part:
    • Expose it only to a constrained group of users.
    • Expose only a necessary subset of its functionality.
  • For that subset:
    • Eliminate complexity, making it as simple as possible.
    • Apply the winning systems of least-privilege and default-deny.
    • If a given function can be achieved in a less risky way, take it.
    • Where complexity is unavoidable, treat that software with suspicion.
  • Remove unused or vestigial elements; they are a liability.
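
Here is a minimal sketch of a few of those principles in code: a dispatcher that exposes only an explicit allowlist of operations (default-deny) and refuses callers outside a constrained group (least-privilege). The operation names and role checks are hypothetical.

```python
# Minimal default-deny dispatcher: anything not explicitly allowed is refused,
# and even allowed operations are restricted to a constrained group of callers.
ALLOWED_OPERATIONS = {
    "read_report":   {"roles": {"analyst", "admin"}},
    "export_report": {"roles": {"admin"}},
}

def dispatch(operation, caller_roles):
    entry = ALLOWED_OPERATIONS.get(operation)
    if entry is None:
        raise PermissionError(f"operation not exposed: {operation}")  # default deny
    if not (entry["roles"] & set(caller_roles)):
        raise PermissionError(f"caller not permitted: {operation}")   # least privilege
    return f"executing {operation}"

print(dispatch("read_report", ["analyst"]))   # allowed
# dispatch("delete_everything", ["admin"])    # raises: never exposed in the first place
```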

The practical extent to which you can limit attack surface is case-specific. It depends upon the level of control you have over the software in question and the environment within which it runs. However, everyone can do something to limit their attack surface, even if their control is limited to that of a network or systems administrator.

This is the part where we risk a little obsolescence by talking in more detail about practices.

Shedding Vulnerabilities
  • Deactivate non-essential modules/plugins. Apply this universally, from applications to kernels.
  • Remove unused functionality from code or Operating Systems.
  • Deactivate non-essential network services (see the audit sketch after this list).
  • Better still, don’t install or activate them in the first place.
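
A small audit sketch of the network-services point, assuming the third-party psutil package is installed (and that you run it with sufficient privileges); the allowlist of ports is a hypothetical placeholder you would replace with your own.

```python
# Sketch: enumerate listening TCP/UDP sockets and flag anything not on an
# explicit allowlist. Requires the third-party psutil package; may need
# elevated privileges to see every process.
import psutil

ALLOWED_PORTS = {22, 443}   # hypothetical allowlist: SSH and HTTPS only

for conn in psutil.net_connections(kind="inet"):
    if conn.status == psutil.CONN_LISTEN and conn.laddr.port not in ALLOWED_PORTS:
        print(f"unexpected listener on {conn.laddr.ip}:{conn.laddr.port} (pid {conn.pid})")
```
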
Restricting Vulnerabilities
  • Don’t expose networked applications to traffic which is any of the following (a gating sketch follows this list):
    • Not yet authenticated.
    • Not yet validated in some way.
    • From non-paying users/agents (business model permitting).
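
A sketch of that gating idea: reject traffic before it reaches the application proper unless it carries something you have already validated. The header name and token set here are hypothetical placeholders, not a real API.

```python
# Sketch: drop requests that are not yet authenticated before any application
# code (parsers, business logic) ever sees them.
VALID_TOKENS = {"example-token-1", "example-token-2"}   # assumption: pre-issued tokens

def gate(request_headers, handler):
    token = request_headers.get("X-Auth-Token")
    if token not in VALID_TOKENS:
        return 401, "rejected before reaching the application"
    return handler()

# Only authenticated traffic reaches the (potentially vulnerable) handler.
status, body = gate({"X-Auth-Token": "example-token-1"}, lambda: (200, "ok"))
print(status, body)
```
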
Shielding Vulnerabilities
  • Where services are only used from within your network (or within a host), prevent them from being accessed from other points of origin.
  • If services can be bound to individual IP addresses or interfaces, bind them (see the sketch after this list).
  • Remove unused network protocol stacks.
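
A sketch of the binding point using Python’s standard library: the same server code, exposed to every interface or only to the host itself, depending on one argument.

```python
# Sketch: bind a service to loopback so it is unreachable from other hosts.
# Binding to "0.0.0.0" would expose the identical code on every interface.
import socketserver

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        self.wfile.write(self.rfile.readline())

# Reachable only from this host; swap "127.0.0.1" for a specific interface
# address if the service must be offered to one network segment.
server = socketserver.TCPServer(("127.0.0.1", 8081), EchoHandler)
# server.serve_forever()   # left commented so the sketch doesn't block
```
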
Designing Out Vulnerability

Now the hard part. If you are designing or building software, you have a golden opportunity to avoid whole classes of vulnerability. Doing so may mean changing how your application or group of applications work together to provide the user with a capability. There will be a trade-off and that trade-off will be specific to your situation. Choose wisely.

  • Can you break one application into two or more based on functional necessity?
  • Can you use a cutout in your system design?
  • Can you reduce the number of routines acting on data over a network?
  • Can you substitute reading from the network with reading from files?
  • Can you perform strict validation of those files?
  • Can you break your processing into a producer/consumer model (as sketched after this list)?
  • Can you swap a parser for something simpler with equivalent functionality?
  • Can you use ephemerality or transience, such that vulnerable routines or infrastructure are only exposed momentarily?
  • Can you sacrifice a little efficiency for a reduction in attack surface?
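
To make a few of those questions concrete, here is a hedged sketch of a producer/consumer split in which the consumer never touches the network: it reads newline-delimited records from a file and strictly validates each one before doing any work. The file name and record format are assumptions for illustration.

```python
# Sketch: producer/consumer split with strict validation. The consumer reads
# from a file rather than the network and discards anything that does not
# match a deliberately narrow format.
import re

RECORD = re.compile(r"^[a-z]{1,16},[0-9]{1,6}$")   # e.g. "widget,42"

def produce(path, records):
    with open(path, "w") as f:
        for r in records:
            f.write(r + "\n")

def consume(path):
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not RECORD.fullmatch(line):
                continue        # strict validation: drop anything unexpected
            name, count = line.split(",")
            yield name, int(count)

produce("queue.txt", ["widget,42", "<script>alert(1)</script>", "bolt,7"])
print(list(consume("queue.txt")))   # [('widget', 42), ('bolt', 7)]
```
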
Encoding Suspicion

Everyone should be sanitising information on the way into their software, subroutine, or process. Methods for doing this are well documented. Do you sanitise information on the way out again? For 30 years we’ve had Design by Contract and software contracts, but they aren’t used by everyone. If you know that a given routine ought to produce only one of ‘n’ outputs, then why not discard anything not in that set? Don’t assume sanitised inputs will produce sane and safe outputs. Expect the unexpected.
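
A minimal sketch of that idea: a postcondition wrapper that refuses to pass on any output not in the routine’s declared result set. The decorator and the status values are illustrative assumptions, not a standard library feature.

```python
# Sketch: encode suspicion about outputs as well as inputs. If a routine is
# only supposed to return one of a small set of values, refuse anything else.
import functools

def output_contract(allowed):
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            if result not in allowed:
                raise ValueError(f"{fn.__name__} produced an out-of-contract value: {result!r}")
            return result
        return wrapper
    return decorate

@output_contract({"allow", "deny", "defer"})
def decide(request):
    # A deliberate typo stands in for a subtle defect somewhere in the routine.
    return "allow" if request.get("trusted") else "dney"

print(decide({"trusted": True}))    # "allow"
# decide({"trusted": False})        # raises ValueError instead of leaking "dney" downstream
```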

Conclusion

I have a confession to make. I short-changed you on the quote at the top of this post. The full quote is:

“Out of every one hundred men, ten shouldn’t even be there, eighty are just targets, nine are the real fighters, and we are lucky to have them, for they make the battle. Ah, but the one, one is a warrior, and he will bring the others back.”

Heraclitus 535 - 475 BC

You might be that one warrior. You might be fortunate enough to have a phalanx of nine “real fighters” on your team. You might manage to protect the ninety who “shouldn’t be there” or who are “just targets”. Wouldn’t it be better to prevent them from coming under attack in the first place? Spartans didn’t take prisoners. Neither do hackers, angry customers, the press, the regulator, or corporate lawyers. Leave the weak and the vulnerable at home. Reduce your attack surface.

What might a future look like where attack surface reduction was simply another part of compiler optimisation? See my next post.


  1. Woody & Mead. “Using Quality Metrics and Security Methods to Predict Software Assurance”. Software Engineering Institute, Carnegie Mellon University, June 20th, 2016.

  2. Woody, Ellison, Nichols. “Predicting Software Assurance Using Quality and Reliability Measures”. CMU/SEI-2014-TN-02, Software Engineering Institute, Carnegie Mellon University, 2014.

  3. Oracle Database 12.2. https://news.ycombinator.com/item?id=18442941

  4. OpenSSL CVEs are tracked from 1999, LibreSSL’s from 2014. The former averages 10 vulnerabilities per year and 113 since 2014; the latter counts <2 per year and just 7 since 2014.

Nick Hutton

Engineer, Investor, Founder, Product Manager

London, England https://blog.eutopian.io