Thursday, August 22, 2013

Getting Application Security Vulnerabilities Fixed

It’s a lot harder to fix application security vulnerabilities than it should be.

In their May 2013 security report, WhiteHat Security published some discouraging findings about how many application security vulnerabilities found in testing get fixed, and how long fixing them takes. They found that only 61% of serious security vulnerabilities get fixed, and that on average it takes 193 days to fix them.

Why some vulnerabilities get fixed, or don’t get fixed

Convincing management and the customers paying for software development work – and the developers that need to do the work – that security vulnerabilities really need to be fixed is one part of the problem. Proving that this can be done in a cost effective and safe way is another.

For most organizations, compliance – not operational risk, not customer requirements or other commercial considerations, and not concern for quality – determines whether security vulnerabilities get fixed. WhiteHat customers reported that the #1 reason that a vulnerability gets fixed is because it is required by compliance. And the #1 reason that people don’t fix a vulnerability is because it isn't required by compliance.

One of the other factors that influences how many security vulnerabilities get fixed, and how fast, is where the system is in its life. It’s a lot easier and much less expensive to fix security vulnerabilities found early in a project, before you’ve written a lot of code that needs to be reviewed, fixed and tested again, and while the situation and the system are both still plastic enough that you can make course corrections without a lot of time or cost.

Obviously it’s a much different story for legacy systems on life support, where nobody really understands the code or is confident that they can change it safely, and nobody is sure how long the system is going to be around (although these systems almost always hang on longer than anyone expects).

Everything in between is where decisions are difficult to make: the system is already in use and has been for a while, and the team maintaining and supporting it has a full book of committed work to deal with. It can be hard to make fixing security vulnerabilities a priority when things seem to be running fine, and everyone is busy trying to keep it this way, unless maybe compliance is standing over you holding a big enough hammer that you have to do something to show that you are taking them seriously.

What’s it going to cost?

If you can make the case that there are serious security problems that need to be taken care of, where do you start? A security review could uncover hundreds or even thousands of vulnerabilities – the first time that you do a security scan or pen test of a big system can be overwhelming. How much work is it going to take to “make the system secure”, and what is it going to cost?

Denim Group has done some interesting research on understanding how much work is involved in fixing security vulnerabilities.

Like any other bugs in code, some vulnerabilities are easier to find and fix than others. An XSS vulnerability can take anywhere from 10 minutes (stored XSS) to about an hour and a half (stored and reflected) to fix – and most web apps have at least one, often hundreds, of these problems. Fixing a SQL Injection problem also takes an hour and a half on average. A missing authorization check? Only 7 minutes. And like any other bugs in code, the coding work is only a small part of the time taken to get the fix done (on average, 30% of the total time). Testing takes around half of the time, and the rest goes into getting set up to make the change, getting the new code built and deployed, and overhead.
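To make the scale of these fixes concrete, here is a minimal sketch of what a typical SQL Injection fix looks like in a Java web application using JDBC. The class, table and column names are made up for illustration; the point is that the change is local and mechanical: replace string concatenation with a parameterized query.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class AccountDao {

    // Vulnerable version: user input is concatenated straight into the SQL,
    // so input like "alice' OR '1'='1" changes the meaning of the query.
    //
    //   stmt.executeQuery("SELECT balance FROM accounts WHERE owner = '" + owner + "'");

    // Fixed version: a parameterized query keeps the input as data, not SQL.
    public double getBalance(Connection conn, String owner) throws SQLException {
        try (PreparedStatement stmt = conn.prepareStatement(
                "SELECT balance FROM accounts WHERE owner = ?")) {
            stmt.setString(1, owner); // the JDBC driver handles quoting and escaping
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? rs.getDouble("balance") : 0.0;
            }
        }
    }
}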

Unlike a functional bug, the customer won’t see any immediate advantage in fixing a vulnerability – the code works fine right now as far as they can see. So it’s important that you can fix vulnerabilities without spending too much time or money doing it, and that you can do it without breaking whatever is already working.

This is why Nick Galbreath stresses the value of a Continuous Deployment capability as a pre-requisite to a successful software security program: leveraging Continuous Integration and Continuous Delivery tools and practices so that when developers check in code changes, the system is automatically built and tested, and can be automatically deployed if all of the tests pass. It’s not about pushing every code change out immediately – it’s about having a proven pipeline in place for rolling changes out to production quickly and with minimal risk, knowing that you can make fixes and get them out cheaply and with confidence, and that you can respond to an emergency if you have to. This pays dividends outside of security work too, reducing the cost and risk of making any software change.

Getting security bugs fixed

Denim Group explains that remediating software security vulnerabilities has to be managed like any other software development project, and they provide some guidelines on how to do it in a waterfally kind of way, with upfront stakeholder engagement and planning: an approach that can work fine for many organizations, especially larger ones.

A more iterative, Agile approach could start with a short, time-boxed spike. Take a couple of smart developers and give them a couple of weeks to review the list of vulnerabilities (if possible with whoever found them), understand which ones are serious and filter out false positives, and pick some vulnerabilities to fix (a few each of different kinds). They should choose which vulnerabilities to work on by trading off what is easy to understand and fix against the risk of not fixing them. References like the OWASP Top 10 or the CWE/SANS Top 25 can help with understanding the issues and making these decisions.

SQL Injection would make a good first choice: a vulnerability that is easy to exploit and can have serious consequences, but is also easy for a developer to understand and fix. Or a missed authorization check: another potentially serious bug that should be easy to understand, and trivial to fix and test. A problem like a mistake in secure password storage might be technically harder to solve, but it is still easy to isolate and test. Adding server-side validation (instead of validating only at the client) is another good, easy place to start.
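The authorization and validation fixes are just as contained. Here is a rough sketch in the same spirit as the SQL Injection example above, using a Java servlet; the servlet, parameter and role names are hypothetical.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class DeleteOrderServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {

        // Missing authorization check: the old code assumed that only admins could
        // reach this URL because the link was hidden in the UI. The fix is an
        // explicit, server-side check on every request.
        if (!req.isUserInRole("admin")) {
            resp.sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }

        // Server-side validation: don't trust that client-side JavaScript already
        // checked the parameter; validate it again here.
        String orderId = req.getParameter("orderId");
        if (orderId == null || !orderId.matches("\\d{1,10}")) {
            resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "invalid order id");
            return;
        }

        deleteOrder(Long.parseLong(orderId));
        resp.setStatus(HttpServletResponse.SC_NO_CONTENT);
    }

    private void deleteOrder(long orderId) {
        // ... existing delete logic (not shown) ...
    }
}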

It is important that the developers take the time to understand what they are doing and how to do it right: that they understand the vulnerability, why it is a problem, how to fix it correctly, and how to test it (test it to make sure that they actually closed the security hole, and regression test it to make sure that they didn't break anything else by accident). The important thing here isn't to make a few fixes – it’s to learn what’s involved in correctly solving these problems, and know that you can build and deploy the fixed code properly.
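One way to build that confidence is to capture the exploit as an automated test, so the fix is verified now and protected against regressions later. Here is a sketch that assumes JUnit 4 and an in-memory H2 database as test dependencies, written against the hypothetical AccountDao from the SQL Injection example above.

import static org.junit.Assert.assertEquals;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class AccountDaoInjectionTest {

    private Connection conn;
    private final AccountDao dao = new AccountDao();

    @Before
    public void setUp() throws Exception {
        // Private, throwaway in-memory database standing in for the real one.
        conn = DriverManager.getConnection("jdbc:h2:mem:");
        try (Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE accounts (id INT, owner VARCHAR(100), balance DOUBLE)");
            stmt.execute("INSERT INTO accounts VALUES (1, 'alice', 100.0), (2, 'bob', 250.0)");
        }
    }

    @After
    public void tearDown() throws Exception {
        conn.close();
    }

    @Test
    public void injectionPayloadDoesNotLeakOtherAccounts() throws Exception {
        // Before the fix, this payload widened the WHERE clause and returned bob's
        // balance to a caller who should not see it. Now it matches no owner at all.
        assertEquals(0.0, dao.getBalance(conn, "nobody' OR owner = 'bob"), 0.001);
    }

    @Test
    public void legitimateLookupStillWorks() throws Exception {
        // Regression check: the fix must not break the normal case.
        assertEquals(100.0, dao.getBalance(conn, "alice"), 0.001);
    }
}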

Then run another spike. Pick a few other bugs, maybe some that are harder to understand and fix, and some that are easy to fix but less serious (like missing error handling or information leaks), and run through the same steps again.
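For the information leak kind of fix, the change is often just as mechanical. Here is a sketch, with hypothetical class and parameter names: instead of letting an exception bubble up so the container prints a stack trace in the browser, log the details on the server and send back a generic error.

import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ReportServlet extends HttpServlet {

    private static final Logger LOG = Logger.getLogger(ReportServlet.class.getName());

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        try {
            String report = buildReport(req.getParameter("reportId"));
            resp.getWriter().write(report);
        } catch (Exception e) {
            // Before: the exception propagated and the container's default error page
            // could show the full stack trace (class names, SQL fragments, file paths).
            // After: keep the details in the server log, return a generic message.
            LOG.log(Level.SEVERE, "report generation failed", e);
            resp.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, "report unavailable");
        }
    }

    private String buildReport(String reportId) {
        // ... existing report logic (not shown) ...
        return "";
    }
}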

With a small investment of time like this, you should get an understanding of what work needs to be done, how to do it, and what it’s going to cost, and you should also have the confidence that you can do it safely. You should have good enough information to estimate the amount of work it will take to fix the remaining problems, and a good enough understanding of the risk and cost trade-offs to decide which problems need to be fixed – and can be fixed – sooner, and which can wait.

Now you can add the remaining bugs that you plan to fix to your backlog. You might decide to fix as many of them as you can all at once in a hardening sprint, or prioritize and fix them with the other work in your backlog.

You can’t fix, or effectively plan to fix, security vulnerabilities until you understand them. Once you understand the problems (what the bugs are, what needs to be fixed and why), and how much it is going to cost to fix them, and once you have the confidence that you can fix them properly, you can treat security vulnerabilities like other bugs – decide what needs to be fixed and when by trading off cost and risk with the other work that you have to get done. Remediation work becomes just another software development problem to be managed, something that developers and managers already know how to do.
