Thursday, July 17, 2014

Trust instead of Threats

According to Dr. Gary McGraw’s groundbreaking work on software security, up to half of security mistakes are made in design rather than in coding. So it’s critical to prevent – or at least try to find and fix – security problems in design.

For the last 10 years we’ve been told that we are supposed to do this through threat modeling (aka architectural risk analysis) – a structured review of the design or architecture of a system from a threat perspective to identify security weaknesses and come up with ways to resolve them.

But outside of a few organizations like Microsoft, threat modeling isn’t being done at all, or at best is only done inconsistently.

Cigital’s work on the Build Security In Maturity Model (BSIMM), which looks in detail at application security programs in different organizations, has found that threat modeling doesn't scale. Threat modeling is still too heavyweight, too expensive, too waterfally, and requires special knowledge and skills.

The SANS Institute’s latest survey on application security practices and tools asked organizations to rank the application security tools and practices they used the most and found most effective. Threat modeling was second last.

And at the 2014 RSA Conference, Jim Routh at Aetna, who has implemented large-scale secure development programs in 4 different major organizations, admitted that he has not yet succeeded in injecting threat modeling into design anywhere “because designers don’t understand how to make the necessary tradeoff decisions”.

Most developers don’t know what threat modeling is, or how to do it, never mind practice it on a regular basis. With the push to accelerate software delivery, from Agile to One-Piece Continuous Flow and Continuous Deployment to production in DevOps, the opportunities to inject threat modeling into software development are disappearing.

What else can we do to include security in application design?

If threat modeling isn’t working, what else can we try?

There are much better ways to deal with security than threat modelling... like not being a tool.
JeffCurless, comment on a blog post about threat modeling

Security people think in terms of threats and risks – at least the good ones do. They are good at exploring negative scenarios and what-ifs, discovering and assessing risks.

Developers don’t think this way. For most of them, walking through possibilities, things that will probably never happen, is a waste of time. They have problems that need to be solved, requirements to understand, features to deliver. They think like engineers, and sometimes they can think like customers, but not like hackers or attackers.

In his new book on Threat Modeling, Adam Shostack says that telling developers to “think like an attacker” is like telling someone to think like a professional chef. Most people know something about cooking, but cooking at home and being a professional chef are very different things. The only way to know what it’s like to be a chef and to think like a chef is to work for some time as a chef. Talking to a chef or reading a book about being a chef or sitting in meetings with a chef won’t cut it.

Developers aren’t good at thinking like attackers, but they constantly make assertions in design, including important assertions about dependencies and trust. This is where security should be injected into design.

Trust instead of Threats

Threats don’t seem real when you are designing a system, and they are hard to quantify, even if you are an expert. But trust assertions and dependencies are real and clear and concrete. Easy to see, easy to understand, easy to verify. You can read the code, or write some tests, or add a run-time check.
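Making a trust assertion executable often takes only a few lines of code. Here’s a minimal sketch (in Java; the AccountId type and its 10-digit format rule are invented for illustration) of turning the assumption “ids from the upstream system are valid” into an explicit run-time check that fails closed:

```java
// A sketch only: turning the trust assumption "account ids from the
// upstream system are 10-digit strings" into an explicit run-time check.
// The AccountId type and its format rule are invented for illustration.
import java.util.regex.Pattern;

public final class AccountId {
    private static final Pattern VALID = Pattern.compile("\\d{10}");
    private final String value;

    private AccountId(String value) {
        this.value = value;
    }

    // Fail closed: reject anything that doesn't match the contract
    // instead of assuming the system on the other side of the trust
    // boundary has already validated it.
    public static AccountId fromUntrusted(String raw) {
        if (raw == null || !VALID.matcher(raw).matches()) {
            throw new IllegalArgumentException("invalid account id");
        }
        return new AccountId(raw);
    }

    public String value() {
        return value;
    }
}
```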

Reviewing a design this way starts off the same as a threat modeling exercise, but it is much simpler and less expensive. Look at the design at a system or subsystem-level. Draw trust boundaries between systems or subsystems or layers in the architecture, to see what’s inside and what’s outside of your code, your network, your datacenter:

Trust boundaries are like software firewalls in the system. Data inside a trust boundary is assumed to be valid, commands inside the trust boundary are assumed to have been authorized, users are assumed to be authenticated. Make sure that these assumptions are valid. And make sure to review dependencies on outside code. A lot of security vulnerabilities occur at the boundaries with other systems, or with outside libraries because of misunderstandings or assumptions in contracts.
OWASP Application Threat Modeling

Then, instead of walking through STRIDE or CAPEC or attack trees or some other way of enumerating threats and risks, ask some simple questions about trust:

Are the trust boundaries actually where you think they are, or think they should be?

Can you trust the system or subsystem or service on the other side of the boundary? How can you be sure? Do you know how it works, what controls and limits it enforces? Have you reviewed the code? Is there a well-defined API contract or protocol? Do you have tests that validate the interface semantics and syntax?

What data is being passed to your code? Can you trust this data – has it been validated and safely encoded, or do you need to take care of this in your code? Could the data have been tampered with or altered by someone else or some other system along the way?

Can you trust the code on the other side to protect the integrity and confidentiality of data that you pass to it? How can you be sure? Should you enforce this through a hash or an HMAC or a digital signature or by encrypting the data?

Can you trust the user’s identity? Have they been properly authenticated? Is the session protected?

What happens if an exception or error occurs, or if a remote call hangs or times out – could you lose data or data integrity, or leak data, does the code fail open or fail closed?

Are you relying on protections in the run-time infrastructure or application framework or language to enforce any of your assertions? Are you sure that you are using these functions correctly?

These are all simple, easy-to-answer questions about fundamental security controls: authentication, access control, auditing, encryption and hashing, and especially input data validation and input trust, which Michael Howard at Microsoft has found to be the cause of half of all security bugs.
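Some of these questions can be answered directly in code. The integrity question above, for example (should you enforce this through an HMAC?), comes down to a few lines with the standard javax.crypto API. A minimal sketch, with key provisioning and storage deliberately out of scope:

```java
import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// A sketch only: tagging data with an HMAC before it crosses a trust
// boundary, and verifying it on the other side. How the shared secret
// is provisioned and stored is out of scope here.
public final class MessageAuthenticator {
    private final SecretKeySpec key;

    public MessageAuthenticator(byte[] sharedSecret) {
        this.key = new SecretKeySpec(sharedSecret, "HmacSHA256");
    }

    // Compute the tag to send along with the data.
    public byte[] tag(byte[] data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        return mac.doFinal(data);
    }

    // Verify on receipt; MessageDigest.isEqual is a constant-time
    // comparison, which avoids leaking information through timing.
    public boolean verify(byte[] data, byte[] expectedTag) throws Exception {
        return MessageDigest.isEqual(tag(data), expectedTag);
    }
}
```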

Secure Design that can actually be done

Looking at dependencies and trust will find – and prevent – important problems in application design.

Developers don’t need to learn security jargon, come up with attacker personas, build catalogs of known attacks and risk-weighting matrices, figure out how to use threat modeling tools, know what a cyber kill chain is, or understand the relative advantages of asset-centric threat modeling over attacker-centric or software-centric modeling.

They don’t need to build separate models or hold separate formal review meetings. Just look at the existing design, and ask some questions about trust and dependencies. This can be done by developers and architects in-phase as they are working out the design or changes to the design – when it is easiest and cheapest to fix mistakes and oversights.

And like threat modeling, questioning trust doesn’t need to be done all of the time. It’s important when you are in the early stages of defining the architecture or when making a major design change, especially a change that makes the application’s attack surface much bigger (like introducing a new API or transitioning part of the system to the Cloud). Any time that you are doing a “first of”, including working on a part of the system for the first time. The rest of the time, the risks of getting trust assumptions wrong should be much lower.

Just focusing on trust won’t be enough if you are building a proprietary secure protocol. And it won’t be enough for high-risk security features – although you should be trying to leverage the security capabilities of your application framework or a special-purpose security library to do this anyway. There are still cases where threat modeling should be done – and code reviews and pen testing too. But for most application design, making sure that you aren’t misplacing trust should be enough to catch important security problems before it is too late.

Wednesday, July 9, 2014

10 things you can do as a developer to make your app secure: #10 Design Security In

There’s more to secure design and architecture than properly implementing Authentication, Access Control and Logging strategies, and choosing (and properly using) a good framework.

You need to consider and deal with security threats and risks at many different points in your design.

Adam Shostack’s new book on Threat Modeling explores how to do this in detail, with lots of exercises and examples on how to look for and plug security holes in software design, and how to think about design risks.

But some important basic ideas in secure design will take you far:

Know your Tools

When deciding on the language(s) and technology stack for the system, make sure that you understand the security constraints and risks that your choices will dictate. If you’re using a new language, take time to learn how to write code properly and safely in that language. If you’re programming in Java, C, C++ or Perl, check out CERT’s secure coding guidelines for those languages. If you're writing code on iOS, read Apple's Secure Coding Guide. For .NET, review OWASP's .NET Security project.

Look for static analysis tools like Findbugs and PMD for Java, JSHint for JavaScript, OCLint for C/C++ and Objective-C, Brakeman for Ruby, RIPS for PHP, Microsoft's static analysis tools for .NET, or commercial tools that will help catch common security bugs and logic bugs in coding or in Continuous Integration.
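To give a sense of what these tools catch: SQL built by string concatenation is one of the classic findings. A sketch of the bug and the fix:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CustomerDao {
    // BAD: concatenating user input into the SQL string - the classic
    // injection bug that static analysis tools are good at flagging.
    public ResultSet findBad(Connection conn, String name) throws SQLException {
        return conn.createStatement().executeQuery(
            "SELECT * FROM customers WHERE name = '" + name + "'");
    }

    // BETTER: a parameterized query keeps the data out of the SQL text.
    public ResultSet findSafe(Connection conn, String name) throws SQLException {
        PreparedStatement ps =
            conn.prepareStatement("SELECT * FROM customers WHERE name = ?");
        ps.setString(1, name);
        return ps.executeQuery();
    }
}
```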

And make sure that you (or ops) understand how to lock down or harden the O/S and to safely configure your container and database (or NoSQL data) manager.

Tiering and Trust

Tiering or layering, and trust in design, are closely tied together. You must understand and verify trust assumptions at the boundaries of each layer in the architecture, between systems, and between components in design, in order to decide what security controls need to be enforced at these boundaries: authentication, access control, data validation and encoding, encryption, logging.

Understand when data or control crosses a trust boundary: to/from code that is outside of your direct control. This could be an outside system, or a browser or mobile client or other type of client, or another layer of the architecture or another component or service.

Thinking about trust is much simpler and more concrete than thinking about threats. And easier to test and verify. Just ask some simple questions:

Where is the data coming from? How can you be sure? Can you trust this data – has it been validated and safely encoded? Can you trust the code on the other side to protect the integrity and confidentiality of data that you pass to it? Do you know what happens if an exception or error occurs – could you lose data or data integrity, or leak data, does the code fail open or fail closed?

Before you make changes to the design, make sure that you understand these assumptions and make sure that the assumptions are correct.
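One way to make sure an assumption is correct is to pin it down with a test. A sketch (JUnit 4, with a made-up OrderService standing in for your real code) verifying that the server really does validate data itself instead of trusting the client:

```java
import org.junit.Test;

// A sketch only: OrderService is a made-up stand-in for real code.
public class TrustBoundaryTest {

    static class OrderService {
        void placeOrder(String sku, int quantity) {
            // The server enforces its own rules; it does not trust
            // whatever validation the browser may (or may not) have done.
            if (quantity <= 0) {
                throw new IllegalArgumentException("quantity must be positive");
            }
            // ... place the order ...
        }
    }

    // Assumption under test: tampered client data is rejected server-side.
    @Test(expected = IllegalArgumentException.class)
    public void rejectsTamperedQuantity() {
        new OrderService().placeOrder("widget-42", -1000);
    }
}
```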

The Application Attack Surface

Finally, it’s important to understand and manage the system’s Attack Surface: all of the ways that attackers can get in, or get data out, of the system, all of the resources that are exposed to attackers. APIs, files, sockets, forms, fields, URLs, parameters, cookies. And the security plumbing that protects these parts of the system.

Your goal should be to try to keep the Attack Surface as small as possible. But this is much easier said than done: each new feature and new integration point expands the Attack Surface. Try to understand the risks that you are introducing, and how serious they are. Are you creating a brand new network-facing API or designing a new business workflow that deals with money or confidential data, or changing your access control model, or swapping out an important part of your platform architecture? Or are you just adding yet another CRUD admin form, or one more field to an existing form or file? In each case you are changing the Attack Surface, but the risks will be much different, and so will the way that you need to manage these risks.

For small, well-understood changes the risks are usually negligible – just keep coding. If the risks are high enough you’ll need to do some abuse case analysis or threat modeling, or make time for a code review or pen testing.

And of course, once a feature or option or interface is no longer needed, remove it and delete the code. This will reduce the system’s Attack Surface and simplify your maintenance and testing work.

That’s it. We’re done.

The 10 things you can do as a developer to make your app secure: from thinking about security in architectural layering and technology choices, including security in requirements, taking advantage of other people’s code by using frameworks and libraries carefully, making sure that you implement basic security controls and features like Authentication and Access Control properly, protecting data privacy, logging with security in mind, and dealing with input data and stopping injection attacks, especially SQL injection.

This isn’t an exhaustive list. But understanding and dealing with these important issues in application security – including security when you think about requirements and design and coding and testing, knowing more about your tools and using them properly – is work that all developers can do, and will take you a long, long way towards making your system secure and reliable.

Monday, July 7, 2014

10 things you can do as a developer to make your app secure: #9 Start with Requirements

To build a secure system, you should start thinking about security from the beginning.

Legal and Compliance Constraints

First, make sure that everyone on the team understands the legal and compliance requirements and constraints for the system.

Regulations will drive many of the security controls in your system, including authentication, access control, data confidentiality and integrity (and encryption), and auditing, as well as system availability and reliability.

Agile teams in particular should not depend only on their Product Owner to understand and communicate these requirements. Compliance restrictions can impose important design constraints which may not be clear from a business perspective, as well as assurance requirements that dictate how you need to build and test and deliver the system, and what evidence you need to show that you have done a responsible job.

As developers you should try to understand what all of this means to you as early as possible. As a place to start, Microsoft has a useful and simple guide (Regulatory Compliance Demystified: An Introduction to Compliance for Developers) that explains common business regulations including SOX, HIPAA, and PCI-DSS and what they mean to developers.

Tracking Confidential Data

The fundamental concern in most regulatory frameworks is controlling and protecting data.

Make sure that everyone understands what data is private/confidential/sensitive and therefore needs to be protected. Identify and track this data throughout the system. Who owns the data? What is the chain of custody? Where is the data coming from? Can the source be trusted? Where is the data going to? Can you trust the destination to protect the data? Where is the data stored or displayed? Does it have to be stored or displayed? Who is authorized to create it, see it, change it, and do these actions need to be tracked and reviewed?

The answers to these questions will drive requirements for data validation, data integrity, access control, encryption, and auditing and logging controls in the system.
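One lightweight way to keep this tracking visible in the code itself is to tag sensitive fields explicitly. This is a hypothetical sketch, not a standard library; the annotation and the field names are invented:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical marker annotation: makes confidential fields easy to
// find in reviews, and usable by logging filters or custom checks.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface Sensitive {
}

class Customer {
    String customerId;   // public identifier, fine to log

    @Sensitive
    String ssn;          // regulated PII: encrypt at rest, never log

    @Sensitive
    String cardNumber;   // PCI DSS scope: mask on display
}
```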

Application Security Controls

Think through the basic functional application security controls: Authentication, Access Control, Auditing – all of which we’ve covered earlier in this series of posts. Where do these controls need to be added? What security stories need to be written? How will these controls be tested?

Business Logic Can Be Abused

Security also needs to be considered in business logic, especially in multi-step application workflows that deal with money or other valuable items, handle private or sensitive information, or implement command and control functions. Features like online shopping carts, online banking account transactions, user password recovery, bidding in online auctions, or online trading and root admin functions are all potential targets for attack.

The user stories or use cases for these features should include exceptions and failure scenarios (what happens if a step or check fails or times out, or if the user tries to cancel or repeat or bypass a step?) and requirements derived from "abuse cases" or “misuse cases”. Abuse cases explore how the application's checks and controls could be subverted by attackers or how the functions could be gamed, looking for common business logic errors including time of check/time of use and other race conditions and timing issues, insufficient entropy in keys or addresses, information leaks, failure to prevent brute forcing, failing to enforce workflow sequencing and approvals, and basic mistakes in input data validation and error/exception handling and limits checking.

This isn’t defence-against-the-dark-arts black hat magic, but getting this stuff wrong can be extremely damaging. For some interesting examples of how bad guys can exploit small and often just plain stupid mistakes in application logic, read Jeremiah Grossman’s classic paper “Seven Business Logic Flaws that put your Website at Risk”.

Make time to walk through important abuse cases when you’re writing up stories or functional requirements, and make sure to review this code carefully and include extra manual testing (especially exploratory testing) as well as pen testing of these features to catch serious business logic problems.
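To make one of these abuse cases concrete: enforcing workflow sequencing means the server rejects out-of-order steps outright, no matter what the client sends. A minimal sketch with an invented checkout flow:

```java
// A sketch only: the checkout steps are invented for illustration.
enum CheckoutStep { CART, SHIPPING, PAYMENT, CONFIRMED }

class CheckoutWorkflow {
    private CheckoutStep current = CheckoutStep.CART;

    // Allow only the single legal next step. A client that replays,
    // repeats or skips a request (e.g. jumping straight to CONFIRMED
    // without paying) fails here, whatever the UI allowed.
    void advanceTo(CheckoutStep requested) {
        if (requested.ordinal() != current.ordinal() + 1) {
            throw new IllegalStateException(
                "illegal transition: " + current + " -> " + requested);
        }
        current = requested;
    }
}
```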

We’re close to the finish line. The final post in this series is coming up: Design and Architect Security In.

Wednesday, July 2, 2014

10 things you can do as a developer to make your app secure: #8 Leverage other people's Code (Carefully)

As you can see from the previous posts, building a secure application takes a lot of work.

One shortcut to secure software can be to take advantage of the security features of your application framework. Frameworks like .NET and Rails and Play and Django and Yii provide lots of built-in security protection if you use them properly. Look to resources like OWASP’s .NET Project and .NET Security Cheat Sheet, the Ruby on Rails Security Guide, the Play framework Security Guide, Django’s security documentation, or How to write secure Yii applications, Apple’s Secure Coding Guide or the Android security guide for developers for framework-specific security best practices and guidelines.

There will probably be holes in what your framework provides, which you can fill in using security libraries like Apache Shiro, or Spring Security, or OWASP’s comprehensive (and heavyweight) ESAPI, and special purpose libraries like Jasypt or Google KeyCzar and the Legion of the Bouncy Castle for crypto, and encoding libraries for XSS protection and protection from other kinds of injection.

Keep frameworks and libraries up to date

If you are going to use somebody else’s code, you also have to make sure to keep it up to date. Over the past year or so, high-profile problems including a rash of serious vulnerabilities in Rails in 2013 and the recent OpenSSL Heartbleed bug have made it clear how important it is to know all of the Open Source frameworks and libraries that you use in your application (including in the run-time stack), and to make sure that this code does not have any known serious vulnerabilities.

We’ve known for a while that popular Open Source software components are also popular (and easy) attack targets for bad guys. And we’re making it much too easy for the bad guys.

A 2012 study by Aspect Security and Sonatype looked at 113 million downloads of the most popular Java frameworks (including Spring, Apache CXF, Hibernate, Apache Commons, Struts and Struts2, GWT, JSF, Tapestry and Velocity) and security libraries (including Apache Shiro, Jasypt, ESAPI, BouncyCastle and AntiSamy). They found that 37% of this software contained known vulnerabilities, and that people continued to download obsolete versions of software with well-known vulnerabilities more than ¼ of the time.

This has become a common enough and serious enough problem that using software frameworks and other components with known vulnerabilities is now in the OWASP Top 10 Risk list.

Find Code with Known Vulnerabilities and Patch It - Easy, Right?

You can use a tool like OWASP’s free Dependency Check or commercial tools like Sonatype CLM to keep track of Open Source components in your repositories and to identify code that contains known vulnerabilities.

Once you find the problems, you have to fix them - and fix them fast. Research by White Hat Security shows that serious security vulnerabilities in most Java apps take an average of 91 days to fix once a vulnerability is found. That’s leaving the door wide open for way too long, almost guaranteeing that bad guys will find their way in. If you don't take responsibility for this code, you can end up making your app less secure instead of more secure.

Next: let’s go back to the beginning, and look at security in requirements.

Monday, June 30, 2014

10 things you can do as a developer to make your app secure: #7 Logging and Intrusion Detection

This is part 7 of a series of posts on the OWASP Top 10 Proactive Development Controls: 10 things you can do as a developer to make your app secure.

Logging and Intrusion Detection

Logging is a straightforward part of any system. But logging is important for more than troubleshooting and debugging. It is also critical for activity auditing, intrusion detection (telling ops when the system is being hacked) and forensics (figuring out what happened after the system was hacked). You should take all of this into account in your logging strategy.

What to log, what to log… and what not to log

Make sure that you are always logging when, who, where and what: timestamps (you will need to take care of syncing timestamps across systems and devices or be prepared to account for differences in time zones, accuracy and resolution), user id, source IP and other address information, and event details.

To make correlation and analysis easier, follow a common logging approach throughout the application and across systems where possible. Use an extensible logging framework like SLF4J with Logback or Apache Log4j/Log4j2.
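Here’s a rough sketch of what that can look like with SLF4J, using MDC to carry the who/where context so every log line can be correlated (the field names are illustrative):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class SecurityEventLogger {
    private static final Logger log =
        LoggerFactory.getLogger(SecurityEventLogger.class);

    // Capture who and where once per request; the logging layout can
    // then stamp these MDC fields onto every line for correlation.
    public void onRequestStart(String userId, String sourceIp) {
        MDC.put("user", userId);
        MDC.put("srcIp", sourceIp);
    }

    // The framework supplies the timestamp; record the event details.
    public void loginFailed(String userId) {
        log.warn("authentication failure for user={}", userId);
    }

    public void onRequestEnd() {
        MDC.clear();
    }
}
```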

Compliance regulations like PCI DSS may dictate what information you need to record and when and how, who gets access to the logs, and how long you need to keep this information. You may also need to prove that audit logs and other security logs are complete and have not been tampered with (using an HMAC for example), and ensure that these logs are always archived. For these reasons, it may be better to separate out operations and debugging logs from transaction audit trails and security event logs.

There is data that you must log (complete sequential history of specific events to meet compliance or legal requirements). Data that you must not log (PII or credit card data or opt-out/do-not-track data or intercepted communications). And other data you should not log (authentication information and other personal data).

And watch out for Log Forging attacks, where bad guys inject delimiters like extra CRLF sequences into text fields which they know will be logged in order to cover their tracks, or inject JavaScript into data which will trigger an XSS attack when the log entry is displayed in a browser-based log viewer. As with other injection attacks, protect the system by encoding user data before writing it to the log.
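A simple first line of defence is to neutralize delimiters before user-supplied data reaches the log. A minimal sketch:

```java
// A sketch only: neutralize CR/LF so attacker-controlled data can't
// forge extra log lines. Data shown in a browser-based log viewer
// also needs HTML encoding on display (not handled here).
public final class LogSanitizer {
    private LogSanitizer() {
    }

    public static String clean(String untrusted) {
        if (untrusted == null) {
            return "";
        }
        return untrusted.replace('\r', '_').replace('\n', '_');
    }
}
```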

Review code for correct logging practices and test the logging code to make sure that it works. OWASP’s Logging Cheat Sheet provides more guidelines on how to do logging right, and what to watch out for.

AppSensor – Intrusion Detection

Another OWASP project, OWASP AppSensor, explains how to build on application logging to implement application-level intrusion detection. AppSensor outlines common detection points in an application: places where you should add checks to alert you that your system is being attacked. For example, if a server-side edit catches bad data that should already have been edited at the client, or catches a change to a non-editable field, then you either have some kind of coding bug or (more likely) somebody has bypassed client-side validation and is attacking your app. Don’t just log this case and return an error: throw an alert or take some kind of action to protect the system from being attacked, like disconnecting the session.
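A sketch of what a detection point like this can look like (the alerting hook is a made-up stand-in; the real AppSensor project defines much richer event types and responses):

```java
import java.util.Map;
import javax.servlet.http.HttpSession;

public class ProfileController {

    // Hypothetical alerting hook: a real system would route this to ops
    // monitoring (AppSensor defines richer event types and responses).
    static void raiseAlert(String event, String sessionId) {
        System.err.println("SECURITY ALERT: " + event + " session=" + sessionId);
    }

    public void updateProfile(HttpSession session, Map<String, String> form) {
        // accountType is rendered read-only in the client, so a change
        // arriving here means somebody bypassed client-side controls.
        if (form.containsKey("accountType")) {
            raiseAlert("non-editable field modified", session.getId());
            session.invalidate(); // fail closed: don't just log and carry on
            return;
        }
        // ... normal processing ...
    }
}
```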

You could also check for known attack signatures: Nick Galbreath (formerly at Etsy and now at startup Signal Sciences) has done some innovative work on detecting SQL injection and HTML injection attacks by mining logs to find common fingerprints and feeding this back into filters to detect when attacks are in progress and potentially block them.

In the next 3 posts, we’ll step back from specific problems, and look at the larger issues of secure architecture, design and requirements.

Monday, June 23, 2014

10 things you can do as a developer to make your app secure: #6 Protect Data and Privacy

This is part 6 of a series of posts on the OWASP Top 10 Proactive Development Controls.

Regulations – and good business practices – demand that you protect private and confidential customer and employee information such as PII and financial data, as well as critical information about the system itself: system configuration data and especially secrets. Exposing sensitive information is a serious and common problem, sitting at #6 on the OWASP Top 10 risk list.

Protecting data and privacy is mostly about encryption: encrypting data in transit, at rest, and during processing.

Encrypting Data in Transit – Using TLS properly

For web apps and mobile apps, encrypting data in transit means using SSL/TLS.

Using SSL isn't hard. Making sure that it is set up and used correctly takes more work. OWASP’s Transport Layer Protection Cheat Sheet explains how SSL and TLS work and the rules that you should follow when using them.

Ivan Ristic at Qualys SSL Labs provides a lot of useful information and free tools that explain how to set up SSL correctly and test the strength of your website’s SSL setup.

And OWASP has another Cheat Sheet on Certificate Pinning that focuses on how you can prevent man-in-the-middle attacks when using SSL/TLS.

Encrypting Data at Rest

The first rule of crypto is: Never try to write your own encryption algorithm. The other rules and guidelines for encrypting data correctly are explained in another Cheat Sheet from OWASP which covers the different crypto algorithms that you should use and when, and the steps that you need to follow to use them.

Even if you use a standard crypto algorithm, setting up and managing keys properly, along with the other steps involved, can still be hard to get right. Libraries like Google KeyCzar or Jasypt will take care of these details for you.
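If you do end up working at the JCE level yourself, the usual baseline is a standard authenticated mode like AES-GCM with a fresh random IV for each message. A minimal sketch using only the standard javax.crypto API (key storage and rotation are deliberately out of scope; that's where the libraries above earn their keep):

```java
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public final class FieldEncryptor {
    private static final SecureRandom RANDOM = new SecureRandom();

    public static SecretKey newKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        return kg.generateKey();
    }

    // Encrypt with a fresh random IV for every message; the IV is
    // prepended to the ciphertext so decryption can recover it.
    public static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];
        RANDOM.nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);
        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }
}
```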

Extra care needs to be taken with safely storing (salting and hashing) passwords – something that I already touched on in an earlier post in this series on Authentication. The OWASP Password Storage Cheat Sheet walks you through how to do this.

Implement Protection in Process

The last problem to look out for is exposing sensitive data during processing. Be careful not to store this data unencrypted in temporary files and don’t include it in logs. You may even have to watch out when storing it in memory.

In the next post we'll go through security and logging.

Wednesday, June 18, 2014

10 things you can do as a developer to make your app secure: #5 Authentication Controls

This is part 5 of a series of posts on the OWASP Top 10 Proactive Development Controls.

In the previous post, we covered how important it is to think about Access Control and Authorization early in design, and common mistakes to avoid in implementing access controls. Now, on to Authentication, which is about proving the identity of a user, and then managing and protecting the user’s identity as they use the system.

Don't try to build this all yourself

Building your own bullet-proof authentication and session management scheme is not easy. There are lots of places to make mistakes, which is why “Broken Authentication and Session Management” is #2 on the OWASP Top 10 list of serious application security problems. If your application framework doesn't take care of this properly for you, then look at a library like Apache Shiro or OWASP’s ESAPI which provide functions for authentication and secure session management.

Users and Passwords

One of the most common authentication methods is to ask the user to enter an id and password. Like other fundamentals in application security this is simple in concept, but there are still places where you can make mistakes which the bad guys can and will take advantage of.

First, carefully consider rules for user-ids and passwords, including password length and complexity.

If you are using an email address as the user-id, be careful to keep this safe: bad guys may be interested in harvesting email addresses for other purposes.

OWASP’s ESAPI has methods generateStrongPassword and verifyPasswordStrength which apply a set of checks that you could use to come up with secure passwords. Commonly-followed rules for password complexity are proving to be less useful in protecting a user’s identity than letting users take advantage of a password manager or enter long password strings. A long password phrase or random data from a password manager is much harder to crack than something like

Password1!
which is likely to pass most application password strength checks.

Password recovery is another important function that you need to be careful with – otherwise attackers can easily and quickly steal user information and hijack a user’s identity. OWASP’s Forgot Password Cheat Sheet can help you design a secure recovery function, covering things like choosing and using good User Security Questions, properly verifying the answer to these questions, and using a side channel to send a reset password token.

Storing passwords securely is another critically important step. It’s not enough just to salt and hash passwords any more. OWASP’s Password Storage Cheat Sheet explains what you need to do and what algorithms to use.
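For a sense of what the cheat sheet describes, here is a minimal sketch of the salt-plus-slow-hash approach using the JDK's built-in PBKDF2. The iteration count and other parameters are illustrative only; follow the cheat sheet for current recommendations (and note that PBKDF2WithHmacSHA256 needs Java 8):

```java
import java.security.SecureRandom;
import java.security.spec.KeySpec;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public final class PasswordHasher {
    private static final SecureRandom RANDOM = new SecureRandom();

    // A fresh random salt per user defeats precomputed rainbow tables.
    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        RANDOM.nextBytes(salt);
        return salt;
    }

    // Illustrative parameters: the high iteration count is what makes
    // offline brute forcing expensive. Store the salt, iteration count
    // and hash together for each user.
    public static byte[] hash(char[] password, byte[] salt) throws Exception {
        KeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                               .generateSecret(spec)
                               .getEncoded();
    }
}
```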

Session Management

Once you establish a user’s identity, you need to associate a unique session id with all of the actions that the user performs after they log in. The user’s session id has to be carefully managed and protected: attackers can impersonate the user by guessing or stealing the session id. OWASP’s Session Management Cheat Sheet explains how to properly set up a session, how to manage the different stages in a session id life cycle, and common attacks and defences on session management.
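One of the basic defences the cheat sheet covers is issuing a fresh session id when the user logs in, so that a pre-login (possibly fixated or stolen) id is useless afterwards. A minimal servlet-based sketch:

```java
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class LoginHandler {

    // After authentication succeeds, discard the pre-login session and
    // its id so a fixated or previously leaked id can't be reused.
    public void onLoginSuccess(HttpServletRequest request, String userId) {
        HttpSession old = request.getSession(false);
        if (old != null) {
            old.invalidate();
        }
        HttpSession fresh = request.getSession(true); // new session id issued
        fresh.setAttribute("user", userId);
    }
}
```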

For more information on Authentication and how to do it right, go to OWASP's Authentication Cheat Sheet.

We’re halfway through the list. On to #6: Protecting Data and Privacy.
