Soft2Secure

The Evolution of Software Security Assurance. Part 1


Jacob West, Director of Security Research at Fortify Software

Hi, my name is Jacob West. I’m the Director of Security Research at Fortify Software, recently acquired by Hewlett Packard. Today I want to talk about the evolution of software security assurance, some of the origins of the practices that we take for granted today, and where I think some of those practices are going to go in the future.

First, a little bit about myself. I got into the security field by working on a static analysis tool at UC Berkeley called MOPS. From there I went on to work at Fortify for the last seven years, building commercial-grade static analysis tools. And a few years ago I was lucky enough to publish a book called “Secure Programming with Static Analysis”, which I co-authored with Brian Chess, one of our founders at Fortify. It’s all about how static analysis applies to the security problem, and about the kinds of problems that static analysis is good at finding.


Background

So, to talk about the beginning of software security and software security assurance, we really have to go back to the beginning of the software industry. The software industry was born in the late 1960s – early 1970s, and the main driver for the creation of an independent software industry was IBM decoupling the software and services around the computer systems it sold from the hardware itself. This decoupling meant consumers had to start thinking about security as a composite problem: they were buying hardware from one vendor, and software systems, services, and various solutions from other vendors, and then they needed to put all of these together into a system that met their business needs. And this created a security problem for the consumer, because now not all systems were created equal. You weren’t buying everything from IBM; you were buying a piece from here and a piece from there.

So, this really created the challenge that we, as a field, call software security. On one side, we have security professionals – people who used to configure firewalls and perhaps try to attack systems that were deployed locally. And on the other side, you have people who are responsible for building software systems – developers, QA engineers, and so on. Software security is really about bringing these two fields together: people who know about security threats, attacks, and countermeasures; and people who know about software systems – how to build them and how to test them.

Key Disciplines

Today I’m going to talk about three key disciplines that contribute to software security assurance: threat modeling, code review, and penetration testing. And I’ll talk a little bit about each of these, and then I’ll talk about how we bring them all together from a compliance and risk management standpoint.

Threat Modeling

So, threat modeling is all about the kinds of problems that someone designing or building a system from the foundation up cares about. It’s about understanding what could go wrong with the system once it’s eventually built, and then designing, and eventually implementing, the system to avoid those threats. The origins of threat modeling really come from a couple of key papers related to secure design principles. Saltzer and Schroeder published a seminal work on design principles in 1974; from it we have principles like the principle of least privilege and others that carry on as a foundation for computer security today. Morrie Gasser published quite a bit on designing complete security systems, so that every aspect – the software, the hardware, and the eventual configuration – ties together to provide a secure solution.

Threat modeling really encompasses a variety of other activities as well: things like architectural risk assessment, the development of abuse cases; but I’m going to talk about it generally as an all-encompassing field that’s asking the question: “What could go wrong with our system once we eventually build it and deploy it?” In order to talk effectively about threat modeling, we have to understand what we mean by a threat, and different people have different definitions of this. But I like one that Matt Bishop published in his book “Computer Security”. In the book Matt says: “A threat is a potential violation of security” – notice potential violation of security. “Actions that could cause the violation to occur are called attacks. Those who execute such actions or cause them to be executed are called attackers.”

And it’s really interesting to think about the idea that a threat is this collection of things: a potential vulnerability – so, a potential weakness in a computer system which could be exploited; an attacker – someone who is interested in exploiting that system, has some motivation to compromise its security; and then the attack itself – some interaction with the software system that the attacker provides in order to elicit some desirable response, in order to exploit the potential vulnerability.
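To make that decomposition a bit more concrete, here is a minimal sketch, in Python, of a threat modeled as a record with those three parts. The class and the example threat are purely illustrative assumptions on my part, not a data model from any particular tool or methodology.

```python
# Illustrative only: Bishop-style decomposition of a threat into three parts.
from dataclasses import dataclass

@dataclass
class Threat:
    vulnerability: str  # the potential weakness that could be exploited
    attacker: str       # who is motivated to exploit it
    attack: str         # the interaction that would trigger the weakness

# A hypothetical threat against a login form.
sql_injection = Threat(
    vulnerability="login form builds SQL queries from raw user input",
    attacker="anonymous internet user seeking account takeover",
    attack="submit crafted SQL in the username field",
)
print(sql_injection)
```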

The basic steps of the threat modeling activity, or threat modeling process, are as follows: first, the person or team performing the threat modeling needs to understand what the system is supposed to do, how it is designed, what the major components are, where certain functionality is implemented, what the underlying technologies are, and what implications those may have on security – really, the big picture of the system from a security and a technological standpoint.

Next, the person or the team needs to enumerate scenarios in which the security of that system might fail. These are basically the hypotheticals: “What if there were an attacker who mounted a certain kind of attack, and the system was vulnerable in a certain way?” These aren’t necessarily vulnerabilities that we know to be present; instead, they are hypotheticals, the things that could go wrong. We may include things that are very likely to occur, and we may include things that are far-fetched and very unlikely to occur.

Once we create this list, and we try to make it as complete as possible, the next step is to go through and prioritize the elements on the list, decide which threats, which potential failures are most important and introduce the most risk to the business.

Once we’ve prioritized them, we can work from most important to least important and associate a remediation, or response, with each of those threats, each of those potential security failures. What we write down may be that we’ve changed the design of the system to mitigate the threat, and therefore we don’t believe it’s realistic anymore; or it may be a justification for why the threat isn’t serious enough to respond to, because the maximum impact would be too low to matter; or it may be a pointer forward into a future development cycle, saying “Okay, for this release this is acceptable, but beyond that the risk would be too great, so we need to address it in the next major release or the next patch.”
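To illustrate that flow – enumerate, prioritize, then attach a response to each threat – here is a rough Python sketch. The 1-to-5 likelihood and impact scales, the example threats, and the response labels are my own assumptions for illustration, not a prescribed scoring scheme.

```python
# Illustrative enumerate -> prioritize -> respond loop for a threat model.
from dataclasses import dataclass

@dataclass
class ThreatEntry:
    description: str
    likelihood: int               # 1 (far-fetched) .. 5 (very likely)
    impact: int                   # 1 (negligible) .. 5 (severe)
    response: str = "undecided"   # e.g. "mitigated by redesign", "accepted: low impact", "deferred to next release"

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

threats = [
    ThreatEntry("SQL injection via the search form", likelihood=4, impact=5),
    ThreatEntry("Physical theft of the database server", likelihood=1, impact=5),
    ThreatEntry("Session fixation during login", likelihood=3, impact=3),
]

# Work through the list from most to least important and record a response for each.
for t in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"risk={t.risk:2d}  {t.description}  ->  {t.response}")
```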

I think one of the most valuable aspects of threat modeling is the way it can inform other activities in the development lifecycle, in particular the two that I’m talking about today: code review and penetration testing can benefit a great deal from threat modeling. In the case of code review, during your threat modeling activity you can identify the attack surface for the application and you can identify code that may be at risk or is the target of one of the threats that you have enumerated. Once you’ve done this, then you can prioritize your code review activities and prioritize the static analysis results that you decide to review and remediate based on that knowledge of the threat model for the application.
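As a concrete, and purely hypothetical, example of that prioritization, here is a small Python sketch that boosts static analysis findings landing on the attack surface identified during threat modeling. The file paths, the finding format, and the scoring are invented for illustration; they are not the output of any real analyzer.

```python
# Illustrative only: prioritize static analysis findings by the threat model's attack surface.
attack_surface = {"src/web/login.py", "src/web/search.py", "src/api/upload.py"}

findings = [
    {"file": "src/web/login.py",    "issue": "SQL injection",       "severity": 5},
    {"file": "src/util/strings.py", "issue": "Unreleased resource", "severity": 2},
    {"file": "src/api/upload.py",   "issue": "Path manipulation",   "severity": 4},
]

def review_priority(finding: dict) -> int:
    # Findings that land on the attack surface from the threat model get reviewed first.
    boost = 10 if finding["file"] in attack_surface else 0
    return finding["severity"] + boost

for f in sorted(findings, key=review_priority, reverse=True):
    print(review_priority(f), f["file"], "-", f["issue"])
```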

From a penetration testing standpoint – because penetration testing is all about emulating what a hacker, an attacker, would do to the system – the threat model is really a scorecard for measuring the effectiveness of your penetration test. It asks: “Have the attacks I exercised in my penetration test covered all the possible threats I think the system might experience once it’s deployed?” And if you have shortcomings there, you can expand your repertoire of penetration testing attacks using the threat model you’ve already developed.
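Here is a rough sketch, again with made-up threat IDs and test names, of using the threat model as that scorecard: checking which enumerated threats were actually exercised by at least one attack in the penetration test.

```python
# Illustrative only: the threat model as a coverage scorecard for a penetration test.
enumerated_threats = {
    "T1: SQL injection against the search form",
    "T2: stored XSS in user comments",
    "T3: authentication bypass via session fixation",
}

# Attacks actually performed, and which enumerated threats each one exercised.
executed_tests = {
    "sqlmap run against /search":      {"T1: SQL injection against the search form"},
    "manual session-cookie tampering": {"T3: authentication bypass via session fixation"},
}

covered = set().union(*executed_tests.values())
missing = enumerated_threats - covered

print("Covered threats:", sorted(covered))
print("Not yet exercised:", sorted(missing))  # expand the attack repertoire here
```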

Also Read:

The Evolution of Software Security Assurance. Part 2.

The Evolution of Software Security Assurance. Part 3.
