Tuesday, July 19, 2016

Why You Should Attack Your Systems - Before "They" Do

You can't hack and patch your way to a secure system.

You will never be able to find all of the security vulnerabilities and weaknesses in your code and network through scanning, or by paying outsiders to try to hack their way in.

The only way to be secure is to design and build security in from the beginning:

  1. threat modeling and risk assessment when designing apps and networks
  2. understanding and using the security features of your languages and frameworks, and filling in any gaps with secure libraries like Apache Shiro or KeyCzar…
  3. hardening the run-time using guidelines like the CIS benchmarks and tools like Chef and Puppet and UpGuard
  4. carefully reviewing every change that you make to code and configuration before putting them into production
  5. training everybody involved so that they know what to do, and what not to do
This is hard work, and it is unavoidable.

So what's the point of penetration testing? Why do organizations like Intuit and Microsoft have Red Teams attacking their production systems? And why are Facebook and Google and even the US Department of Defense running bug bounty programs, paying outsiders to hack into their systems and report bugs?

Because once you've done everything you know how to do - or everything that you think you need to do - to secure your system, the only way to find out whether you've done a good enough job is to attack your systems - before the bad guys do.

Attacking your system can show you where you are strong and where you are weak: what you missed, where you made mistakes. It will uncover misunderstandings and highlight gaps in your design, in your defensive controls, and in your logging and monitoring. Watching your system under attack - seeing what attackers do and how they do it, understanding what to look for and why, learning how to identify attacks and how to respond to them - will change the way that you think and the way that you design, code, set up, and run systems.

Let's look at different ways of attacking your system, and what you can learn from them:

Pen Testing

Pen testing - hiring an ethical hacker to scan and explore your application or network to find vulnerabilities and see what they can do with them - is usually done as part of due diligence, before a new system or a major change is rolled out, or once a year to satisfy some kind of regulatory obligation.

Pen testers will scan and test for common vulnerabilities and common mistakes in network and system configuration, missing patches, and unsafe default settings. They'll find mistakes in authentication and user setup logic, session management, and access control schemes. They'll look at logs and error messages to find information leaks and bugs in error handling, and they will test for mistakes in some business logic (at least for well-understood workflows like online shopping or online banking), trying to work around approval steps or limit checks.

Pen tests should act as a reality check. If they found problems, a bad guy could too - or already has.

Pen testers won't usually have enough time, or understand your system well enough, to find subtle mistakes, even if they have access to documentation and source code. But anything that they do find in a few days or a few weeks of testing should be taken seriously. These are real, actionable insights into weaknesses in your system – and weaknesses in how you built it. Why didn't you find these problems yourself? How did they get there in the first place? What do you need to change in order to prevent problems like this from happening again?

Some organizations will try to narrow the scope of the pen tests as much as possible, in order to increase their chance of getting a "passing grade" and moving on. But this defeats the real point of pen testing. You've gone to the trouble and expense of hiring somebody smart to check your system security. You should take advantage of what they know to find as many problems as possible – and learn as much as you can from them. A good pen tester will explain what they found, how they found it, why it is serious, and what you need to do to fix it.

But pen testing is expensive and doesn't scale. It takes time to find a good pen tester, time to set up and run the test, and time to review, understand, and triage the results before you can work on addressing them. In an Agile or DevOps world, where changes are being rolled out every few days or maybe several times a day, a pen test once or twice a year won't cut it.

Red Teaming

If you can afford to have your own pen testing skills in house, you can take another step closer to what it’s like dealing with real world attacks, by running Red Team exercises. Organizations like Microsoft, Intuit and Salesforce have standing Red Teams who continuously attack their systems – live, in production.

Red Teaming is based on military Capture the Flag exercises. The Red Team - a small group of attackers - tries to break into the system (without breaking the system), while a Blue Team (developers and operations) tries to catch them and stop them.

The Blue Team may know that an attack is scheduled and what systems will be targeted, but they won't know the details of the attack scenarios. While the Red Team's success is measured by how many serious problems they find, and how fast they can exploit them, the Blue Team is measured by MTTD and MTTR: how fast they detect and identify the attack (mean time to detect), and how quickly they stop it, contain it, and recover from it (mean time to recover).
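To make those two measures concrete, here is a minimal sketch in Python of how MTTD and MTTR could be calculated from a record of Red Team exercises. The incident timestamps and the record layout are invented for the example; they are not from any real exercise.

```python
# Illustrative only: compute MTTD and MTTR from a list of incidents, where
# each record holds when the attack started, when the Blue Team detected it,
# and when it was contained. All timestamps here are made up.
from datetime import datetime
from statistics import mean

incidents = [
    # (attack started,             detected,                     contained)
    (datetime(2016, 7, 4, 9, 0),   datetime(2016, 7, 4, 9, 40),  datetime(2016, 7, 4, 11, 5)),
    (datetime(2016, 7, 11, 9, 0),  datetime(2016, 7, 11, 9, 12), datetime(2016, 7, 11, 9, 50)),
]

# MTTD: average time from the start of the attack to detection.
mttd_minutes = mean((detected - started).total_seconds() / 60
                    for started, detected, _ in incidents)

# MTTR: average time from detection to containment/recovery.
mttr_minutes = mean((contained - detected).total_seconds() / 60
                    for _, detected, contained in incidents)

print(f"MTTD: {mttd_minutes:.0f} minutes, MTTR: {mttr_minutes:.0f} minutes")
```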

Like pen testers, the Red Team's job is to find important vulnerabilities, prove that they can be exploited, and help the Blue Team to understand how they found the vulnerabilities, why they are important, and how to fix them properly.

The point of Red Teaming isn't just to find bugs - although you will find good bugs this way, bugs that definitely need to be fixed. The real value of Red Teaming is that you can observe how your system and your Ops team behave and respond under attack: you learn what an attack looks like, you train your team to recognize and respond to attacks, and, by exercising regularly, you get better at all of this.

Over time, as the Blue Team gains experience and improves, as they learn to respond to - and prevent - attacks, the Red Team will be forced to work harder, to look deeper for problems, to be more subtle and creative. As this competition escalates, as both teams push each other, your system - and your security capability - will benefit.

Intuit, for example, runs Red Team exercises the first day of every week (they call this “Red Team Mondays”). The Red Team identifies target systems and builds up their attack plans throughout the week, and publishes their targets internally each Friday. The Blue Teams for those systems will often work over the weekend to prepare, and to find and fix vulnerabilities on their own, to make the Red Team’s job harder. After the Red Team Monday exercises are over, the teams get together to debrief, review the results, and build action plans. And then it starts again.

Bug Bounties

Bug Bounty programs move one step closer to real-world attacks, by enlisting outsiders to hack into your system.

Outside researchers and white hat hackers might not have the insight and familiarity with the system that your own Red Team will. But Bug Bounties will give you access to a large community of people with unique skills, creativity, and time and energy that you can't afford on your own. This is why even organizations like Facebook and Google, who already hire the best engineers available and run strong internal security programs, have had so much success with their Bug Bounty programs.

Like Red Teaming, the rewards and recognition given to researchers drive competition. And like Red Teaming, you need to carefully establish - and enforce - ground rules of conduct: what systems and functions can be attacked and what can't be, how far testers are allowed to go, where they need to stop, and what evidence they need to provide in order to win their bounties.

You can try to set up and run your own program, following guidelines like the ones that Google has published, or you can use a platform like BugCrowd (https://bugcrowd.com/) or HackerOne (https://hackerone.com/) to manage outside testers.

Automated Attacks

But you don't have to wait until outsiders - or even your own Red Team - attack your system to find security problems. Why not attack the system yourself, every day, or every time that you make a change?

Tools like Gauntlt and BDD-Security can be used to run automated security tests and checks on online applications in Continuous Integration or Continuous Delivery, every time that code is checked in and every time that the system configuration is changed.

Gauntlt (http://gauntlt.org/) is an open source testing framework that makes it easy to write security tests in a high-level, English-like language. Because it uses Cucumber under the covers, you can express tests in Gherkin's familiar Given {precondition} When {execute test steps} Then {results should/not be} syntax.

Gauntlt comes with attack adaptors that wrap the details of using security pen testing tools, and with sample attack files for checking your SSL configuration using sslyze, testing for SQL injection vulnerabilities using sqlmap, checking the network configuration using nmap, running simple web app attacks using curl, scanning for common vulnerabilities using arachni and dirb and garmr, and checking for serious vulnerabilities like Heartbleed.

BDD-Security (https://github.com/continuumsecurity/bdd-security) is another open source security testing framework, also based on Cucumber. It includes SSL checking (again using sslyze), scanning for run-time vulnerabilities using Nessus, and it integrates nicely with Selenium, so that you can add automated tests for authentication and access control, and run web app scans using OWASP ZAP as part of your automated functional testing.

All of these tests can be plugged into your CI/CD pipelines so that they run automatically, every time that you make a change, as a security smoke test.
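As a rough illustration of what such a smoke test might check, here is a minimal sketch in Python that fails a build if the target still negotiates an old TLS protocol or is missing basic security headers. This is not a Gauntlt or BDD-Security attack file; the target host and the required headers are assumptions for the example, and a real scan (with sslyze, ZAP, nmap and friends) goes much deeper.

```python
# Minimal sketch of a CI security smoke test: fail the build if the target
# negotiates a weak TLS version or is missing basic security headers.
# TARGET_HOST and REQUIRED_HEADERS are assumptions for this example.
import http.client
import socket
import ssl

TARGET_HOST = "www.example.com"     # hypothetical system under test
REQUIRED_HEADERS = ["Strict-Transport-Security", "X-Content-Type-Options"]


def check_tls(host: str, port: int = 443) -> None:
    """Fail if the negotiated protocol is older than TLS 1.2."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            version = tls.version()
            assert version in ("TLSv1.2", "TLSv1.3"), f"weak protocol: {version}"


def check_headers(host: str) -> None:
    """Fail if any expected security header is missing from the home page."""
    conn = http.client.HTTPSConnection(host, timeout=10)
    conn.request("GET", "/")
    response = conn.getresponse()
    missing = [h for h in REQUIRED_HEADERS if response.getheader(h) is None]
    assert not missing, f"missing security headers: {missing}"


if __name__ == "__main__":
    check_tls(TARGET_HOST)
    check_headers(TARGET_HOST)
    print("security smoke test passed")
```

A quick script like this is cheap enough to run on every check-in; heavier scanning and fuzzing can be scheduled nightly so the pipeline stays fast.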

You can take a similar approach to attack your network.

A number of startups provide automated attack platforms which simulate how adversaries probe and penetrate your systems, and report on any weaknesses that they find.

You can automatically schedule and run pre-defined attacks and validation scenarios (or execute your own custom attacks) as often as you want, against all or parts of your network. These platforms scale easily, and provide you with an attacker's view into your systems and their weaknesses. You can see what attacks were tried, what worked, and why. You can use these tools for regular scanning and testing, to see if changes have left your systems vulnerable, to evaluate the effectiveness of a security defense tool, or, like Red Teaming, to exercise your incident response capabilities.

Running automated tests or attack simulations isn't the same as hiring a pen tester or running a Bug Bounty program or having a real Red Team. These tests have to be structured and limited in scope, so that they can be run often and provide consistent results.

But these tools can catch common and serious mistakes quickly - before anybody else does. They will give you confidence as you make changes. And they can be run continuously, so that you can maintain a secure baseline.

Why You Need to Attack Yourself

There is a lot to be gained by attacking your systems. You'll find real and important bugs and mistakes - bugs that you know have to be fixed.

You can use the results to measure the effectiveness of your security programs, to see where you need to improve, and whether you are getting better.

And you will learn. You'll learn how to think like an attacker, and how your systems look from an attacker's perspective. You'll learn what to watch for, how to identify an attack, how to respond to attacks and how to contain them. You'll learn how long it takes to do this, and how to do it faster and easier.

You'll end up with a more secure system - and a stronger team.

Thursday, June 16, 2016

Dev-Sec.io Automated Hardening Framework

Automated configuration management tools like Ansible, Chef and Puppet are changing the way that organizations provision and manage their IT infrastructure. These tools allow engineers to programmatically define how systems are set up, and automatically install and configure software packages. System provisioning and configuration becomes testable, auditable, efficient, scalable and consistent, from tens to hundreds or thousands of hosts.

These tools also change the way that system hardening is done. Instead of following a checklist or a guidebook like one of the CIS Benchmarks, and manually applying or scripting changes, you can automatically enforce hardening policies or audit system configurations against recognized best practices, using pre-defined hardening rules programmed into code.
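To give a feel for what a codified hardening rule looks like, here is a deliberately simplified sketch in Python that audits a few common SSH settings. The real frameworks express checks like this as Chef recipes, Puppet manifests, Ansible playbooks or InSpec profiles; the config path and the expected values below are just assumptions for the example.

```python
# Illustrative only: audit a few SSH daemon settings against a hardening
# policy, in the spirit of the rules that Chef/Puppet/Ansible/InSpec
# frameworks codify. Path and expected values are assumptions.
import re
import sys

SSHD_CONFIG = "/etc/ssh/sshd_config"
POLICY = {
    "PermitRootLogin": "no",
    "PasswordAuthentication": "no",
    "X11Forwarding": "no",
}


def audit_sshd(path=SSHD_CONFIG):
    """Return findings where the live config deviates from the policy."""
    with open(path) as config:
        text = config.read()
    findings = []
    for option, wanted in POLICY.items():
        match = re.search(rf"^\s*{option}\s+(\S+)", text, re.MULTILINE)
        actual = match.group(1) if match else "<not set>"
        if actual.lower() != wanted.lower():
            findings.append(f"{option}: expected {wanted!r}, found {actual!r}")
    return findings


if __name__ == "__main__":
    failures = audit_sshd()
    for failure in failures:
        print("FAIL:", failure)
    sys.exit(1 if failures else 0)
```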

An excellent resource for automated hardening is a set of open source templates originally developed at Deutsche Telekom under the project name "Hardening.io". The authors have recently had to rename this hardening framework to Dev-Sec.io.

It includes Chef recipes and Puppet manifests for hardening base Linux, as well as for SSH, MySQL and PostgreSQL, Apache and Nginx. Ansible support at this time is limited to playbooks for base Linux and SSH. Dev-Sec.io works on Ubuntu, Debian, RHEL, CentOS and Oracle Linux distros.

For container security, the project team have just added an InSpec profile for Chef Compliance against the CIS Docker 1.11.0 benchmark.

Dev-Sec.io is comprehensive and at the same time accessible. And it’s open, actively maintained, and free. You can review the rules, adopt them wholesale, or cherry pick or customize them if needed. It’s definitely worth your time to check it out on GitHub: https://github.com/dev-sec

Thursday, June 2, 2016

DevOpsSec: Using DevOps to Secure DevOps

I finished writing an e-book for O'Reilly on DevOpsSec: Securing Software through Continuous Delivery. It explains how to wire security into Continuous Delivery, and how to use Continuous Delivery and programmable Infrastructure as Code and other DevOps practices to build and operate more secure systems. It is based on approaches followed by organizations like Etsy, Netflix, LMAX, Amazon, Intuit, Google, and others, including my own firm.

The e-book is available for free download at: http://www.oreilly.com/webops-perf/free/devopssec.csp. I'd appreciate feedback and corrections.

Monday, April 18, 2016

DevOpsDays: Empathy, Scaling, Docker, Dependencies and Secrets

Last week I attended DevOpsDays 2016 in Vancouver. I was impressed to see how strong the DevOps community has grown from the time that I attended my first DevOpsDays event in Mountain View in 2012. There were more than 350 attendees, all of them doing interesting and important work.

Here are the main themes that I followed at this conference:

Empathy – Humanizing Engineering and Ops

There was a strong thread running through the conference on the importance of the human side of engineering and operations, understanding and empathizing with people across the organization. There were two presentations specifically on empathy: one from an engineering perspective by Joyent’s Matthew Smillie, and another excellent presentation on the neuroscience of empathy by Dave Mangot at Librato, which explained how we are all built for empathy and that it is core to our survival. There was also a presentation on gender issues, and several breakout sessions on dealing with people issues and bringing new people into DevOps.

Another side to this was how we use tools to collaborate and build connections between people. More people are depending more on – and doing more with – chat systems like HipChat and Slack to do ChatOps: using chat as a general interface to other tools, and leveraging bots like Hubot to automatically trigger and guide actions, such as tracking releases and handling incidents.

In some organizations, standups are being replaced with Chatups, as people continue to find new ways to engage and connect with other people working remotely and inside and outside of teams.

Scaling DevOps

All kinds of organizations are dealing with scaling problems in DevOps.

Scaling their organizations. Dealing with DevOps at the extremes: at really large organizations, and figuring out how to do DevOps effectively in small teams.

Scaling Continuous Delivery. Everyone is trying to push out more changes, faster and more often in order to reduce risk (by reducing the batch size of changes), increase engagement (for users and developers), and improve the quality of feedback. Some organizations are already reaching the point where they need to manage hundreds or thousands of pipelines, or optimize single pipelines shared by hundreds of engineers, building and shipping out changes (or newly baked containers) several times a day to many different environments.

A common story for CD as organizations scale up goes something like this:

  1. Start out building a CD capability in an ad hoc way, using Jenkins and adding some plugins and writing custom scripts. Keep going until it can’t keep up.
  2. Then buy and install a commercial enterprise CD toolset, transition over and run until it can’t keep up.
  3. Finally, build your own custom CD server and move your build and test fleet to the cloud and keep going until your finance department shouts at you.
Scaling testing. Coming up with effective strategies for test automation where it adds most value – in unit testing (at the bottom of the test pyramid), and end-to-end system testing (at the top of the pyramid). Deciding where to invest your time. Understanding the tools and how to use them. What kind of tests are worth writing, and worth maintaining.

Scaling architecture. Which means more and more experiments with microservices.

Docker, Docker, Docker

Docker is everywhere. In pilots. In development environments. In test environments especially. And more often now, in production. Working with Docker, problems with Docker, and questions about Docker came up in many presentations, breakout sessions and hallway discussions.

Docker is creating new problems at the start and end of the CD pipeline.

First, it moves configuration management upfront into the build step. Every change to the application or change to the stack that it is built and runs on requires you to “bake a new cake” (Diogenes Rettori at Openshift) and build up and ship out a new container. This places heavy demands on your build environment. You need to find effective and efficient ways to manage all of the layers in your containers, caching dependencies and images to make builds run fast.

Docker is also presenting new challenges at the production end. How do you track and manage and monitor clusters of containers as the application scales out? Kubernetes seems to be the tool of choice here.

Depending on Dependencies

More attention is turning to builds and dependency management: managing third-party and open source dependencies, and identifying, streamlining and securing them.

Not just your applications and their direct dependencies – but all of the nested dependencies in all of the layers below (the software that your software depends on, and the software that this software depends on, and so on and so on). Especially for teams working with heavy stacks like Java.

There was a lot of discussion on the importance of tracking dependencies and managing your own dependency repositories, using tools like Archiva, Artifactory or Nexus, and private Docker registries. And stripping back unnecessary dependencies to reduce the attack surface and run-time footprint of VMs and containers. One organization does this by continuously cutting down build dependencies and spinning up test environments in Vagrant until things break.

Docker introduces some new challenges, by making dependency management seem simpler and more convenient, and giving developers more control over application dependencies – which is good for them, but not always good for security:

  • Containers are too fat by default - they include generic platform dependencies that you don’t need and - if you leave this up to developers - developer tools that you don’t want to have in production.
  • Containers are shipped with all of the dependencies baked in. Which means that as containers are put together and shipped around, you need to keep track of what versions of what images were built with what versions of what dependencies and when, where they have been shipped to, and what vulnerabilities need to be fixed.
  • Docker makes it easy to pull down pre-built images from public registries. Which means it is also easy to pull images that are out of date or that could contain malware.
You need to find a way to manage these risks without getting in the way and slowing down delivery. Container security tools like Twistlock can scan for vulnerabilities, provide visibility into run-time security risks, and enforce policies.

Keeping Secrets Secret

Docker, CD tooling, automated configuration management tools like Chef and Puppet and Ansible, and other automated tooling create another set of challenges for ops and security: how to keep the credentials, keys and other secrets that these tools need safe - out of code and scripts, out of configuration files, and out of environment variables.

This needs to be handled through code reviews, access control, encryption, auditing, frequent key rotation, and by using a secrets manager like Hashicorp’s Vault.
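As a rough sketch of what "using a secrets manager" looks like from the application side, the snippet below fetches a database credential from Vault's HTTP API at startup instead of reading it from a config file or an environment variable. The Vault address, the secret path, and the field names are assumptions for the example, and in practice the Vault token itself would be issued by an auth method rather than pulled from the environment.

```python
# Minimal sketch (not production code): read a database credential from
# Vault's key/value store over its HTTP API, instead of baking the
# credential into a config file or script. VAULT_ADDR, the secret path,
# and the field names are assumptions for this example.
import json
import os
import urllib.request

VAULT_ADDR = os.environ.get("VAULT_ADDR", "http://127.0.0.1:8200")
# In practice the token is issued to the host/app by a Vault auth method;
# it is read from the environment here only to keep the sketch short.
VAULT_TOKEN = os.environ["VAULT_TOKEN"]
SECRET_PATH = "secret/myapp/database"      # hypothetical KV (version 1) path


def read_secret(path):
    """Fetch a secret from Vault and return its key/value payload."""
    request = urllib.request.Request(
        f"{VAULT_ADDR}/v1/{path}",
        headers={"X-Vault-Token": VAULT_TOKEN},
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        body = json.load(response)
    return body["data"]          # the secret's key/value pairs


if __name__ == "__main__":
    credentials = read_secret(SECRET_PATH)
    # Use the credential directly; never write it back out to disk or logs.
    print("loaded database credential for user:", credentials.get("username"))
```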

Passion, Patterns and Problems

I met a lot of interesting, smart people at this conference. I experienced a lot of sincere commitment and passion, excitement and energy. I learned about some cool ideas, new tools to use and patterns to follow (or to avoid).

And new problems that need to be solved.
